Robots are getting smarter and more complex, with rapid progress in mechatronics, machine learning, and artificial intelligence (AI). Robots are learning to perceive better, perform dexterous tasks skillfully, and interact effectively with the people around them. The advent of collaborative robots that can work alongside humans without cages is opening up new paradigms for teaching robots new skills through demonstration. These transformational developments are giving rise to new business opportunities in flexible assembly lines, smarter automated warehouses and distribution centers, and intelligent retail stores. New business models, such as Robotics-as-a-Service (RaaS), are emerging with the ability to scale up and down on demand, in place of a CAPEX-oriented acquisition of such systems.
The largest e-commerce businesses deliver millions of packages a day. However, there are periodic reports about poor working conditions at fulfillment centers, some even decrying the fact that humans are being worked like robots. There are other jobs where humans could use a helping hand; truck loaders and baggage handlers everywhere are prone to musculoskeletal problems, apart from accidents and other hazards.
These jobs are not going away, as we demand that more goods be delivered to our doorsteps in a matter of hours. The market is opening up for robots. According to current market research [1, 2], the global industrial robotics market will increase from about $40 billion to $70 billion during 2016–2023, at a compound annual growth rate (CAGR) of about 10%. The collaborative robot (cobot) market is growing even faster, at a CAGR of about 56%, and will reach about $4 billion by 2023, indicating aggressive adoption of cobots [3]. With these emerging trends, we at TCS Research have focused on the following specific areas related to robotics:
Robots for warehouses and shop floors
Automated pick-and-place robots for e-commerce fulfillment centers: Based on the order list received from a user, this robot identifies and picks the items directly from the rack and places them into the packing box, which is then shipped to the user. Some of the core technologies developed for this project include a deep learning-based system for recognizing and segmenting thousands of stock-keeping units (SKUs) in clutter, computing the motion trajectories needed for robot end-effectors to reach targets while avoiding obstacles en route, and optimizing space within shipping boxes. Alternatively, several e-tailers use a highly autonomous and dense goods-to-picker model [4]. The process starts when the robots send item totes to picking stations, where autonomous robots pick and pack the final order by performing semantic and/or instance segmentation, optimal grasping, and final packing into shipping boxes.
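The stages above (segment the rack image, plan a collision-free reach, pack the item) can be sketched as a minimal pipeline. Every function body below is an illustrative stub, and the SKU names are invented; this is a sketch of the control flow, not the deployed system.

```python
# Minimal sketch of the pick-and-place pipeline described above.
# All function bodies are illustrative stubs, not the deployed system.

def segment_items(rack_image, order_list):
    """Stub for deep learning segmentation: return (SKU, pick point) pairs."""
    # A real system would run instance segmentation over the rack image.
    return [(sku, (0.1 * i, 0.2 * i)) for i, sku in enumerate(order_list)]

def plan_trajectory(start, target, obstacles):
    """Stub for obstacle-avoiding motion planning: return a waypoint list."""
    # A real planner would insert collision-free intermediate waypoints.
    return [start, target]

def pack_order(order_list, rack_image=None, home=(0.0, 0.0)):
    """Pick each ordered SKU in turn and record it as packed into the box."""
    box = []
    for sku, pick_point in segment_items(rack_image, order_list):
        plan_trajectory(home, pick_point, obstacles=[])
        box.append(sku)
    return box

print(pack_order(["soap", "notebook", "tape"]))
```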
Automated Truck Loose-Loading Systems (ATLS): We explore the automation of the manual processes involved in loose loading or unloading trucks and in palletizing and de-palletizing goods, using robots in a closed-loop operation. The challenges include identifying the size and shape of packets in cluttered environments and under poor illumination. High throughput is essential while ensuring dense packing with minimal failures, and dealing with dynamic, unstructured environments is a key requirement. The ATLS addresses a huge problem: dead-pile or loose loading utilizes transport capacity far better (about 95%) than standard pallets (about 60%), provided robotic loading and unloading can match the speed of forklift-driven operations.
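The space-optimization side of loading can be illustrated with a classic first-fit-decreasing heuristic: place the largest packets first, each into the first truck section with room left. This is a one-dimensional, volume-only sketch; the real ATLS must pack in three dimensions under shape, fragility, and ordering constraints.

```python
# First-fit-decreasing sketch of loose-loading space optimization.
# Illustrative only: packs by volume alone, ignoring 3-D shape constraints.

def first_fit_decreasing(packet_volumes, section_capacity):
    """Assign packets, largest first, to the first truck section that fits."""
    remaining = []    # free capacity left in each opened section
    assignment = []   # (volume, section index) for each packed packet
    for vol in sorted(packet_volumes, reverse=True):
        for i, free in enumerate(remaining):
            if vol <= free:
                remaining[i] -= vol
                assignment.append((vol, i))
                break
        else:
            # No existing section fits; open a new one.
            remaining.append(section_capacity - vol)
            assignment.append((vol, len(remaining) - 1))
    return assignment, len(remaining)

assignment, sections_used = first_fit_decreasing(
    [0.6, 0.5, 0.5, 0.4, 0.3, 0.2], section_capacity=1.0)
print(sections_used)  # → 3
```

First-fit-decreasing is not optimal in general, but it is fast and comes within a small constant factor of the optimal packing, which is why variants of it are common in loading heuristics.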
Fleet Management Systems (FMS) for Autonomous Mobile Robots (AMRs): We are looking into the problems of Multi-Robot Task Allocation (MRTA) and Multi-Agent Path Finding (MAPF). Significant costs are incurred on internal transport, which carries parcels, parts, and totes within factories, terminals, and fulfillment centers. The task allocation problem is solved using an auction-based system with dependencies, where each agent bids for tasks and the one with the best bid wins the task. The bidding mechanism is based on a utility function, one per agent, that best describes that agent's ability to execute the task at hand. The second problem (MAPF) aims at finding paths for all agents that avoid collisions with other agents and minimize deadlock and livelock situations. These processes will be further extended to distribute work between robot- and human-driven machines working together in shops and warehouses. Further extensions by our academic partners will enable more than one robot to collaborate and maneuver large items through constrained spaces as a team.
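The auction mechanism can be sketched in a few lines: each agent submits a bid per task from its utility function, and the best bid wins. The agent names, positions, and the negated-Manhattan-distance utility below are invented for illustration; a production utility would also weigh battery level, load capacity, and task dependencies.

```python
# Sketch of auction-based multi-robot task allocation: each agent bids its
# utility for every open task, and the highest bidder wins. Here utility is
# simply the negated Manhattan travel distance (an illustrative stand-in).

def auction_allocate(agent_positions, task_positions):
    """Greedily assign each task to the agent with the highest bid."""
    assignment = {}
    for task, tpos in task_positions.items():
        bids = {
            agent: -abs(apos[0] - tpos[0]) - abs(apos[1] - tpos[1])
            for agent, apos in agent_positions.items()
        }
        winner = max(bids, key=bids.get)
        assignment[task] = winner
        agent_positions[winner] = tpos  # winner will end up at the task site
    return assignment

agents = {"amr1": (0, 0), "amr2": (5, 5)}  # hypothetical robot poses
tasks = {"tote_A": (1, 0), "tote_B": (4, 6)}  # hypothetical pickup points
print(auction_allocate(agents, tasks))  # → {'tote_A': 'amr1', 'tote_B': 'amr2'}
```

Because winners update their positions as tasks are awarded, later bids reflect earlier commitments, which is the essential feedback loop of sequential single-item auctions.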
Smart factories: Smart factories represent digitalization, reconfigurability, and agility of assembly lines; mass customization; higher Overall Equipment Effectiveness (OEE); and deep ecosystem linkages across Original Equipment Manufacturers (OEMs), suppliers, and customers. Automation and robotics play a key role in smart factories. Rather than fixed, open-loop pick-and-place, painting, welding, machining, parts presentment, and material resupply cells, increasingly capable cobots and autonomous flexible robots will work uncaged alongside humans. We are also deploying cognitive robotic systems that perform deep learning-based visual quality control and line balancing in the food industry. Working cells that currently require substantial changeover time are being reimagined and 'roboticized' to react to a variety of inputs without lengthy changeovers.
Smart warehouse and factory showcase
Keeping up with the requirements of Industry 4.0, we are working towards setting up a smart warehouse with people-less operations on the shop floor. A schematic of this warehouse is shown in Figure 1. Once an order is placed by a user, it will be processed by an order fulfillment system, which will feed the necessary data to a cloud robotics system that will manage the operation of multiple robots. The cloud platform will carry out compute-heavy tasks such as image processing, task allocation among robots, path planning, and real-time status checks of the robots. Individual robot agents will interact with each other either directly or through cloud mediation to get information about the goals to be accomplished, dynamic map updates, and changing order priorities.
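One of the compute-heavy tasks the cloud platform handles, path planning, can be sketched as a shortest-path search on the warehouse floor grid. The breadth-first search below, with blocked cells standing in for racks and other robots, is a deliberately simple stand-in for the platform's planner; the grid layout and coordinates are invented for illustration.

```python
# Minimal sketch of the cloud planner's path-planning step: breadth-first
# search on a 4-connected grid, where 1-cells stand in for racks/robots.
from collections import deque

def plan_path(grid, start, goal):
    """Return a shortest obstacle-free path from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable from start

warehouse = [[0, 0, 0],
             [1, 1, 0],   # 1 = rack / blocked cell (hypothetical layout)
             [0, 0, 0]]
path = plan_path(warehouse, (0, 0), (2, 0))
print(len(path) - 1)  # → 6 moves around the rack
```

For multiple robots, MAPF extends this by planning in space-time so that reserved cells of one robot become temporary obstacles for the others.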
Looking ahead: smarter, cheaper, and in the cloud
In order to realize the full potential of robotics opportunities in multiple industries, it is important to address some of the research challenges related to computer vision, hand-eye coordination, imitation learning, and hardware-agnostic control design. We present some developments that are already happening now and will gain widespread adoption in the coming years.
Programming by demonstration: Robots can be endowed with the ability to learn from their human operators quickly through a paradigm called 'programming by demonstration' (PbD) [5] or 'imitation learning' [6], without undergoing time-consuming programming-debugging-testing cycles. The operators will be able to interact effectively with these robots through natural modes of human communication, such as gestures, body language, gaze, physical contact (kinesthetic teaching), and even emotions.
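In its simplest form, kinesthetic teaching records timestamped joint waypoints while a human guides the arm, then replays them by interpolation. The sketch below shows that idea for a single joint with made-up demonstration data; real PbD systems generalize demonstrations (for example, with movement primitives) rather than replaying them verbatim.

```python
# Illustrative sketch of kinesthetic teaching: waypoints recorded while a
# human guides the arm are replayed by linear interpolation. The joint
# values and timing below are made-up demonstration data for one joint.

def interpolate(demo, t):
    """Linearly interpolate recorded (time, joint_angle) waypoints at time t."""
    (t0, q0), *rest = demo
    for t1, q1 in rest:
        if t <= t1:
            alpha = (t - t0) / (t1 - t0)
            return q0 + alpha * (q1 - q0)
        t0, q0 = t1, q1
    return q0  # hold the final pose after the demonstration ends

demo = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.5)]  # recorded (seconds, radians)
print(interpolate(demo, 1.5))  # → 1.0, halfway between the last two waypoints
```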
Flexible assembly lines: Smart robots will eventually reduce the overall cost of robot-based automation, making it accessible and affordable to small and medium enterprises (SMEs) that so far could not afford the complex assembly lines required to guide the conventional robots currently used in factories. One such example is the project 'Factory-in-a-day' [7, 8], which aims at reducing installation time and cost, thereby allowing smaller enterprises to rent or lease robots and set up their factories within a single day.
Cloud robotics platforms: These will leverage advances in cloud computing and communication technologies to manage large fleets or swarms of robots spread densely or sparsely over a large area. Such a framework will make it easier for robots to learn from humans as well as other machines, collaborate and cooperate with each other, maintain reliable operations when one or more individual units fail, and increase efficiency and productivity through optimal planning, task allocation, and decentralized decision-making. By combining cloud robotics, vendor-neutral robot skills and action recipes, reconfigurability of work environments and goals, deep learning, and symbolic AI with IT-enabled technologies, it is possible to offer various robot-based services through a RaaS [9, 10] model. A RaaS vendor delivers business process fulfillment on a pay-as-you-use model, leasing robots, machines, and software as necessary. The key to this model is effective utilization of base robots across industries, domain-specific End-of-Arm Tooling (EOAT), and rapid synthesis of task-specific controllers for a fleet of low-cost robots.
Amazon Robotics has been conducting robotics challenges for the past few years, to encourage innovation in the area of warehouse robotic automation. TCS partnered with the Indian Institute of Technology, Kanpur (IITK) and participated in the 2016 and 2017 editions of this challenge.
In these competitions, the participating robots are expected to pick and stow items autonomously from and to a given rack. The picking task involves moving items from the rack and placing them in a tote, while the stowing task involves moving items from the tote to the rack. The objects to be picked or stowed are general household items commonly ordered through the Amazon web store. They vary greatly in size, shape, appearance, hardness, and weight. Since there is no constraint on how the products are organized in the rack or the tote, there are many possible configurations one might encounter during actual operation. This uncertainty, arising from factors such as occlusion and variation in illumination, pose, and viewing angle, makes the problem of autonomous picking and stowing extremely challenging.
The 2017 edition of the challenge added a packing task: picking items and packing them into the three most commonly used Amazon shipping boxes. Unlike the 2016 challenge, where all the objects to be picked and stowed were known in advance, in 2017 the teams were given 20 new items 45 minutes prior to the start of the challenge, within which they had to train their software for object recognition. The rack designs were left to the teams, with only volume constraints given, which in effect made the clutter even denser.
Some of the major innovations by the TCS-IITK team for the ARC 2017 event were as follows:
• An automated data collection system that could generate thousands of labeled images for training deep networks
• A deep learning-based object detection and segmentation system capable of recognizing objects in extreme clutter with high (>95%) accuracy
• An online motion planning system capable of generating trajectories with variable speed while avoiding collisions with obstacles on its way
• A hybrid EOAT that combined suction with gripping action to pick or grasp various kinds of objects, including deformable objects
• A preemptive state machine architecture with suitable parallelization, to finish all picks in half the allotted time
The 2017 ARC challenge at Nagoya yielded rich rewards for the IITK-TCS team, which had a podium finish, securing third place in the pick event, and qualified for the finals, finishing fourth in the final round. The IITK-TCS team finished ahead of several well-known industrial houses (Panasonic, Mitsubishi, and Toshiba) and leading academic institutes (the Massachusetts Institute of Technology, Princeton University, Carnegie Mellon University, the KTH Royal Institute of Technology, and Duke University).
[1] International Federation of Robotics (IFR), Executive Summary World Robotics 2017 Industrial Robots, 2017.
[2] Markets and Markets, Industrial Robotics Market by Type (Articulated, Cartesian, SCARA, Parallel, Collaborative Robots), Industry (Automotive, Electrical & Electronics, Metals & Machinery, Pharmaceuticals & Cosmetics), and Geography - Global Forecast to 2023, July 2017.
[3] Markets and Markets, Collaborative Robots Market by Payload Capacity (Up to 5 kg, Up to 10 kg, Above 10 kg), Industry (Automotive, Electronics, Metals & Machining, Plastics & Polymer, Food & Agriculture, Healthcare), Application, and Geography - Global Forecast to 2023, September 2017.
[4] Intermodal Transport, Cargo Loading and Unloading Efficiency Analysis in Multimodal Transport, July 8, 2014.
[5] International Joint Conference on Neural Networks (IJCNN), Learning Stable Movement Primitives by Finding a Suitable Fuzzy Lyapunov Function from Kinesthetic Demonstrations, July 2018.
[6] Factory-in-a-day, Plug&Work Robots, 2013.
[7] IEEE Transactions on Automation Science and Engineering, Rapyuta: A cloud robotics platform, 2015.
[10] Tata Consultancy Services, Robotics-as-a-Service: Transforming the Future of Retail, 2015.