Production systems face increasingly high expectations for reliability and robustness, which are challenging to meet on a continuous basis. This is leading more and more companies to use artificial intelligence wherever possible to ease the strain on labour and finances.
Recent advances in deep learning have enabled complex robotic tasks like perception, navigation, and manipulation, with promising results. These developments have attracted companies that rely on warehousing, like Amazon, to create autonomous robotic solutions in their giant fulfillment centers, enabling them to automate customer order packaging and shipping.
Amazon in particular has become well known for its artificially intelligent technologies. Since acquiring Kiva Systems, a robotic warehouse specialist, in 2012, the company has revolutionized its working environment by introducing a large fleet of robots that work collaboratively with human employees. The online marketplace seeks to make its warehouses and delivery process ever more efficient, through the use of drones and an increase in the amount of work delegated to machines.
Abdelrahman Ogail is a Software Development Engineer on the core machine learning team at Amazon Robotics, working on the deep learning techniques and software architecture involved in implementing these autonomous robotic tasks at Amazon’s fulfillment centers. I spoke to Abdelrahman to learn more.
Please tell us about your role at Amazon.
I’m a Software Development Engineer on the core machine learning team at Amazon Robotics. Before that I was a Software Engineer on the Microsoft Azure team, while pursuing a Master’s degree at the University of Washington, where I was applying reinforcement learning to autonomous agents for stochastic planning and scheduling tasks.
What started your work in deep learning?
I started out using reinforcement learning with case-based planning for strategic and tactical planning in autonomous adaptive agents. Later on, I began looking into deep learning after seeing it combined with reinforcement learning in the deep reinforcement learning literature. At Amazon Robotics, the first time I used deep learning was for image classification tasks in robotic perception.
What are the key factors that have enabled recent advancements in robotics?
Deep learning (aka neural networks in the 90s and cybernetics in the mid 70s) and its fundamental ideas (like backpropagation, gradient descent, etc.) have been known for a long time. The recent breakthroughs in deep learning were thanks to the availability of large training datasets, powerful computing hardware, and the development of numerical computation and deep learning frameworks like TensorFlow, Caffe, MXNet, and recently Caffe2 by Facebook. Advances in deep learning have paved the way for several computer vision tasks, resulting in better robotic vision and perception accuracy. State-of-the-art image and object recognition techniques can tackle common perception tasks, such as car detection for self-driving cars. On the manipulation and grasping end, the introduction of deep reinforcement learning has opened a new horizon of possibility, with recent research showing that robots can learn how to open doors in only 2.5 hours of training!
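As an editorial aside, the gradient descent idea mentioned above is easy to illustrate. The sketch below is a generic, minimal example (not Amazon’s code): it fits a one-parameter-per-weight linear model by repeatedly stepping each parameter against the gradient of the mean squared error, which is the same update rule that backpropagation applies layer by layer in a deep network.

```python
import numpy as np

# Illustrative only: recover y = 3x + 1 with plain gradient descent.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 3.0 * x + 1.0

w, b = 0.0, 0.0   # parameters, initialised at zero
lr = 0.1          # learning rate (step size)

for _ in range(500):
    y_hat = w * x + b
    err = y_hat - y
    # Gradients of the mean squared error with respect to w and b;
    # backpropagation generalises exactly this computation to many layers.
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b
```

After 500 steps, `w` and `b` converge close to the true values 3 and 1; swapping the linear model for a multi-layer network changes only how the gradients are computed, not the update loop.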
What challenges do you think will be most interesting for researchers & scientists in the next few years?
Incorporating deep learning into automation and having intelligent robots work side by side with humans in a production environment is very challenging and tricky from a robustness and safety standpoint. It’s not realistic to release a self-driving car that has a pedestrian detector with 99% accuracy, as that means that, out of every 100 pedestrians, the car could fail to detect one.
Achieving complete, standalone, end-to-end machine learning based robotic solutions that can interact with humans requires accurate, robust, and stable algorithms that almost never miss. This is one of the most interesting and challenging problems in the field.
Which industries do you think will be shaped most by machine learning in the years to come?
The software and tech industry is rapidly evolving, moving away from hard-coded and heuristic classic algorithm development. Most tech companies realize how much value machine learning brings to the table. This has resulted in the formation and hiring of entire teams and organizations to give decision makers and software engineers the ability to leverage machine learning in their day-to-day tasks.
What do you see in the future for Amazon Robotics?
I see the Amazon Robotics ship steering in the direction of designing fully automated fulfillment centers, and continuing to advance research and development in the field of autonomous and intelligent production robotics. Amazon Robotics deploys robots at scale, with an ordinary fulfillment center holding more than 15,000 robots, all moving around on the floors. Amazon Robotics is a customer-centric company that aims to minimize the cost and time of Amazon.com customers’ order packaging and shipping, and it will continue to evolve to enable this.
Learn more about the latest trends and advancements in AI at the Deep Learning Summit in Boston on 25–26 May, taking place alongside the annual Deep Learning in Healthcare Summit.
Tickets are now limited for this event. Book your place now.
Can’t make it to Boston? Join us at Deep Learning Summits in London and Montreal. View all upcoming events here.
Opinions expressed in this interview may not represent the views of RE•WORK. As a result some opinions may even go against the views of RE•WORK but are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.