RNET has recently been awarded a Department of Energy Phase II SBIR to develop a Machine Learning Based Data Compression (MLDC) algorithm for numeric simulation data.
RNET has recently been awarded a NASA Phase I SBIR to develop a Rapid Data Analytics Platform that applies machine learning and a novel dimensionality reduction algorithm.
RNET has recently been awarded an NIH Phase II SBIR to design and develop Machine Learning tools that help pathologists overcome the limitations of current computing hardware and build more accurate deep learning models for use in clinical diagnostics. These models will be able to analyze very large digitized images of glass slides (i.e., Whole Slide Images) to aid pathologists in tasks such as cancer detection. The ability to analyze these images in their entirety, rather than in small parts, will improve the diagnostic accuracy of the models and accelerate algorithm development efforts.
We are in need of a highly skilled computer scientist to perform research and development in the area of High Performance Computing and/or Big Data systems. The candidate should be eager to work on research and development of advanced HPC software, including the optimization of large-scale numerical simulation, machine learning, and graph analytics codes for emerging high performance compute architectures and future exascale systems (including multi-core, many-core, and GPU-based platforms), and/or the development of tools that improve the usability of these codes and systems.
An immediate position is available for a computer science graduate (M.S. degree required, Ph.D. preferred) to work on existing projects. The candidate is expected to have strong analytical, problem-solving, multi-tasking, and teamwork skills, and to be able to develop ideas for future research projects in the HPC and Big Data fields. The desired candidate needs excellent written and oral communication skills in order to periodically collaborate on research proposals to government agencies, national laboratories, and commercial partners. A solid understanding of, and hands-on experience with, HPC or Big Data systems and their performance and scalability requirements is required. In particular, a strong understanding of one or more of the following aspects of distributed systems is required:
As such, the candidate should have strong experience with programming languages, tools and libraries in HPC or Big Data systems, including one or more of the following:
Experience with one or more of the following is also beneficial: Hadoop tools, commercial storage systems, databases and filesystems, RDMA / InfiniBand.
Please send copies of your curriculum vitae to the email address listed in the original posting (the address is cloaked and requires JavaScript to view).
We are in need of a highly skilled software developer for High Performance Computing and Big Data related projects. The ideal candidate for this position should have several years of software engineering experience and will develop software architecture for large-scale numerical simulations, graph analytics, machine learning, and/or Big Data systems (e.g., Hadoop). A successful candidate will be able to coordinate with partners to ensure proper design and integration. The candidate should have the following:
Any of the following skills is considered an asset:
Please send copies of your curriculum vitae to the email address listed in the original posting (the address is cloaked and requires JavaScript to view).