

Deep learning infrastructure for autonomous driving

The task

To enable autonomous driving, the car must perceive its surroundings without errors and make appropriate decisions based on that perception. Technically, the car's perception results from the real-time fusion of data from the various sensors installed in the vehicle; its decisions are made by algorithms based on neural networks. These neural networks must be developed and trained so that mistakes are avoided.

The challenge
The IT infrastructure for training neural networks must, on the one hand, process large amounts of data (in the terabyte range) and, on the other, support reproducible training workflows for distributed development teams.

Our solution
For this, DaSense creates a deep learning environment that brings together data and analyses and enables the automated training of neural networks. The DaSense Deep Learning module packages the complex deep learning workflow into a few defined steps, resulting in a simple, fast workflow without discontinuities.

First, a new neural network is developed locally for the specific application and tested on a small amount of data to ensure that it runs correctly. Together with the necessary training parameters, this becomes an experiment, which is packaged in a Deep Learning App. The Deep Learning App sends the training code to all available clusters, worldwide if required, so that the training runs can be carried out in parallel, locally or in the cloud. Finally, the results are collected and evaluated, and the model is improved accordingly. A new training cycle can begin.
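
To make this cycle concrete, here is a minimal Python sketch: a small TensorFlow network is smoke-tested locally on a handful of random samples, and the subsequent experiment, packaging, and submission steps are shown as comments. The `dasense` calls are hypothetical placeholders for illustration, not the actual DaSense API.

```python
# Minimal sketch of the develop-test-distribute cycle described above.
# The `dasense` calls in the comments are hypothetical placeholders.
import tensorflow as tf

def build_model():
    # Small CNN standing in for the perception network under development
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 3)),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# Step 1: develop locally and smoke-test on a little data
model = build_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x_small = tf.random.uniform((32, 64, 64, 3))
y_small = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
model.fit(x_small, y_small, epochs=1)  # verifies the code runs correctly

# Step 2: bundle code and training parameters into an experiment
# experiment = dasense.Experiment(code="train.py", params={"epochs": 50, "lr": 1e-3})
# app = dasense.DeepLearningApp(experiment)

# Step 3: send the training code to all available clusters, locally or in the cloud
# jobs = app.submit(clusters=["onprem-gpu", "aws-eu-west-1"])

# Step 4: collect and evaluate the results, improve the model, start the next cycle
# results = [job.result() for job in jobs]
```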

Users can track the status and execution times of deep learning jobs, resource usage, and the intelligent distribution of the workload online, and can intervene in the process at any time.
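
What such monitoring could look like programmatically is sketched below; the `client` object and its job fields are assumptions made for illustration, not a documented interface.

```python
# Hypothetical monitoring loop illustrating the tracking described above;
# `client` and its job attributes are assumptions, not a real DaSense API.
import time

def watch_jobs(client, poll_seconds=30):
    """Poll status, runtime, and resource usage of all deep learning jobs."""
    while True:
        jobs = client.list_jobs()
        for job in jobs:
            print(f"{job.id}: {job.status}, runtime={job.runtime_s}s, "
                  f"gpu_util={job.gpu_utilization:.0%} on {job.cluster}")
            if job.status == "FAILED":
                client.cancel(job.id)  # example of intervening in the process
        if all(j.status in ("FINISHED", "CANCELLED") for j in jobs):
            break
        time.sleep(poll_seconds)
```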

 

The customer benefit

Neural networks are developed, trained, and verified on mass data much faster. Operational downtime is reduced to a minimum.

Project characteristics

Our role

  • Supporting the customer with data scientists, data engineers, software developers, and architects in the Scrum process

Our activities

  • Design of the big data architecture

  • Setting up big data workflows for converting and analyzing data on dev/prod (see the sketch after this list)

  • Development of a deep learning platform for dev/prod
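
As a rough illustration of the conversion workflows mentioned above, the following PySpark sketch converts CSV exports of sensor signals (for example, extracted from rosbag recordings) into date-partitioned Parquet on HDFS and runs a simple analysis. All paths and column names are illustrative assumptions.

```python
# Sketch of a big data conversion workflow; paths and columns are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sensor-conversion").getOrCreate()

# Convert raw CSV exports into a columnar format for downstream analysis
raw = spark.read.csv("hdfs:///data/raw/drive_logs/*.csv",
                     header=True, inferSchema=True)
(raw.withColumn("date", F.to_date("timestamp"))
    .write.mode("overwrite")
    .partitionBy("date")
    .parquet("hdfs:///data/curated/drive_logs"))

# Example analysis: per-sensor message counts per day
curated = spark.read.parquet("hdfs:///data/curated/drive_logs")
curated.groupBy("date", "sensor_id").count().show()
```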

Technologies & methods

  • Applications: DaSense, Grafana, ROS, Jira, Confluence

  • Data / databases: HDFS, HBase, MySQL, Hive, rosbag, MATLAB, HDF5

  • Languages / Frameworks: Python (Anaconda stack), Java, JavaScript, Hadoop / MapR / AWS, Spark, YARN, Oozie, Docker Swarm, Mesos, Kubernetes, Caffe, TensorFlow, Kerberos

  • Methods: Job Scheduling, Data / Model Management, CI / CD
