The new autonomy environment
Automation has historically been pitched as a replacement for “dull, dirty and dangerous” jobs, and that continues to be the case, whether in underground mines, in offshore infrastructure maintenance or, prompted by the pandemic, in medical facilities. Removing humans from harm’s way in sectors as essential and varied as energy, commodities and health care remains a worthy goal.
But self-directed technologies are now going beyond those applications, finding ways to improve efficiency and convenience in everyday spaces and environments, says Kimmel, thanks to innovations in computer vision, artificial intelligence, robotics, materials and data. Warehouse robotics have evolved from glorified trams shuttling materials from A to B into intelligent systems that can range freely across a space, identify obstacles, alter routes based on stock levels and handle delicate items. In surgical clinics, robots excel at microsurgical procedures in which the slightest human tremor can compromise outcomes. Startups in the autonomous vehicle sector are developing applications and services in niches like mapping, data management and sensors. Robo-taxis are already operating commercially in San Francisco and expanding from Los Angeles to Chongqing.³
As autonomous technology steps into more contexts, from public roads to medical clinics, safety and reliability become simultaneously more important to prove and more difficult to assure. Self-driving vehicles and unmanned air systems have already been implicated⁴ in crashes and casualties. “Mixed” environments, featuring both human and autonomous agents, have been identified as posing novel safety challenges.⁵
The expansion of autonomous technology into new domains brings with it a growing cast of stakeholders, from equipment manufacturers to software start-ups. This “system of systems” environment complicates testing, safety and validation norms. Longer supply chains, along with more data and connectivity, introduce or accentuate safety and cyber risks.
As the behaviour of autonomous systems becomes more complex and the number of stakeholders grows, safety models with a common framework and terminology, along with interoperable testing, become necessities. “Traditional systems engineering techniques have been stretched to their limits when it comes to autonomous systems,” says Kimmel. “There is a need to test a far larger set of requirements as autonomous systems are performing more complex tasks and safety-critical functions.” This need is, in turn, driving interest in finding efficiencies that keep test costs from ballooning.
That requires innovations like predictive safety performance measures and preparation for unexpected “black swan” events, Kimmel argues, rather than reliance on conventional metrics like mean time between failures. It also requires ways of identifying the most valuable and impactful test cases. The industry needs to increase the sophistication of its testing techniques without making the process unduly complex, costly or inefficient. To achieve this, it may need to constrain the set of unknowns in the operating mandate of autonomous systems, reducing the testing and safety “state space” from semi-infinite to a testable set of conditions.
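To illustrate what taming that state space might look like, the sketch below discretizes a hypothetical operating domain into a finite test matrix; the parameter names, ranges and bin counts are invented for this example, not drawn from any real test programme.

```python
# Illustrative sketch only: the parameters, ranges and bin counts below
# are hypothetical, not taken from any real autonomous-vehicle test plan.
from itertools import product

# A continuous operating domain, described by value ranges.
operating_domain = {
    "speed_kph":     (0.0, 120.0),
    "visibility_m":  (10.0, 1000.0),
    "road_friction": (0.1, 1.0),   # icy ... dry asphalt
}

def discretize(domain: dict, bins: int) -> list[dict]:
    """Collapse each continuous range into `bins` representative values,
    then enumerate every combination as a concrete test condition."""
    axes = []
    for name, (lo, hi) in domain.items():
        step = (hi - lo) / (bins - 1)
        axes.append([(name, lo + i * step) for i in range(bins)])
    return [dict(combo) for combo in product(*axes)]

test_matrix = discretize(operating_domain, bins=4)
print(f"{len(test_matrix)} test conditions, e.g. {test_matrix[0]}")
```

Once the operating mandate is bounded and binned like this, the set of test conditions is finite and enumerable, and prioritization techniques can then rank which combinations are most valuable to run.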
Autonomous system toolkit and testing
The toolkit for autonomous system safety, testing and assurance continues to evolve. Digital twins have become a development asset in the autonomous vehicle space. Virtual and hybrid “in-the-loop” testing environments enable system-of-systems testing that includes components developed by multiple organizations across the supply chain, while reducing the cost and complexity of real-world testing through digital augmentation.
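A minimal software-in-the-loop sketch of that idea follows; the interfaces and the simulated sensor are invented here purely to show how a virtual stand-in can replace physical hardware so that the same decision logic is exercised in both settings.

```python
# Minimal software-in-the-loop sketch. The interfaces and the simulated
# sensor below are illustrative inventions, not a real vendor API.
import random
from typing import Protocol

class RangeSensor(Protocol):
    def read_distance_m(self) -> float: ...

class SimulatedLidar:
    """Virtual stand-in for physical hardware: replays a scripted scene
    with injected noise, so the same control code runs unmodified."""
    def __init__(self, true_distance_m: float, noise_m: float = 0.05):
        self.true_distance_m = true_distance_m
        self.noise_m = noise_m

    def read_distance_m(self) -> float:
        return self.true_distance_m + random.gauss(0.0, self.noise_m)

def brake_command(sensor: RangeSensor, stop_threshold_m: float = 5.0) -> bool:
    """The system under test: decides whether to brake. In a hybrid
    in-the-loop setup, `sensor` could be real hardware or a simulation."""
    return sensor.read_distance_m() < stop_threshold_m

# Run the same decision logic against many virtual scenes cheaply.
for distance in (2.0, 5.0, 20.0):
    result = brake_command(SimulatedLidar(true_distance_m=distance))
    print(f"distance={distance:>5.1f} m -> brake={result}")
```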
Model-based systems engineering is a full life cycle approach that uses modelling to explore the behaviour of a system, the interactions of its components and its intersections with potential future environments. This allows for the simulation and prediction of system behaviour under different circumstances, enabling developers to proactively seek out weaknesses or threats. These and other methodologies will change how AI- and robotics-powered products are developed and validated, ultimately reducing cost and time to market.
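As a toy illustration of that simulate-and-probe workflow, the sketch below samples random environments against a deliberately simple, invented vehicle-stopping model and records the circumstances under which a safety property fails.

```python
# Toy illustration of model-driven weakness hunting; the vehicle model,
# parameter ranges and safety property are all invented for this sketch.
import random

def stopping_distance_m(speed_mps: float, reaction_s: float,
                        decel_mps2: float) -> float:
    """Simple kinematic model: distance travelled during the reaction
    delay plus the braking distance v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2 * decel_mps2)

def find_weaknesses(trials: int = 10_000) -> list[dict]:
    """Sample environments and record those in which the modelled system
    violates the safety property (stopping within the sensing range)."""
    violations = []
    for _ in range(trials):
        env = {
            "speed_mps":  random.uniform(5.0, 35.0),
            "reaction_s": random.uniform(0.1, 1.5),  # sensing + compute lag
            "decel_mps2": random.uniform(2.0, 8.0),  # wet vs. dry braking
        }
        if stopping_distance_m(**env) > 100.0:       # sensing range, metres
            violations.append(env)
    return violations

bad = find_weaknesses()
print(f"{len(bad)} of 10000 sampled environments violate the property")
```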
Over time, Kimmel predicts, safety and testing collaboration between ecosystem partners will itself generate new standards and leading practices for validation and verification, paving the way for seamless, safe and widespread deployment of autonomous systems across sectors.
EY-Parthenon teams support original equipment manufacturers (OEMs) in autonomous systems integration. This includes developing safety strategies and performance indicators, curating data for training autonomous systems, training algorithms and developing digital twins, such as digitized human-defined “road rules” that could boost transparency in autonomous vehicle safety. “We also support the development of testing and evaluation tools that create interoperable live virtual constructive test environments, and cataloguing performance data and creating ‘test databases,’ including common operating cases and known risks,” says Kimmel. “This allows participants to benchmark performance, for instance, on issues like pedestrian interactions as a factor for autonomous vehicle safety.”
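One way to picture a digitized “road rule” is as a machine-checkable predicate evaluated over logged drive data; the rule, threshold and log schema below are hypothetical stand-ins chosen only to make the idea concrete.

```python
# Hypothetical sketch of a digitized road rule: the rule, threshold and
# log schema are invented here to illustrate the idea, not a standard.
from dataclasses import dataclass

@dataclass
class LogFrame:
    t: float                 # timestamp, seconds
    ego_speed_mps: float
    gap_to_lead_m: float     # distance to the vehicle ahead

def rule_safe_following(frame: LogFrame, min_headway_s: float = 2.0) -> bool:
    """Digitized version of 'keep at least a two-second gap': the gap,
    expressed in time, must meet the headway threshold."""
    if frame.ego_speed_mps <= 0:
        return True
    return frame.gap_to_lead_m / frame.ego_speed_mps >= min_headway_s

log = [LogFrame(0.0, 20.0, 50.0), LogFrame(0.1, 20.0, 35.0)]
for frame in log:
    if not rule_safe_following(frame):
        print(f"rule violated at t={frame.t}: gap {frame.gap_to_lead_m} m")
```

Because the rule is explicit code rather than behaviour buried in a model, auditors and regulators can inspect exactly what the system is being held to.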
Looking to the future, Kimmel outlines five coming trends in the autonomous systems industry.
- Trust will be key for autonomous systems, both for consumers and regulators. As a result, companies are building cultures of safety and risk management, such as through safety management systems (SMS).
- Interoperability and virtual testing will become an imperative. Different systems may need to interact effectively with one another and be tested together in virtual test environments. These environments and testing toolchains will make it possible to assess performance across a large range of potential scenarios and conditions far more quickly than physical testing can.
- Safety performance indicators will level up. The industry likely needs to shift from conventional approaches, like counts of crashes or failures, to predictive metrics such as incursions into a “safety envelope,” erratic or unpredictable motion control, and latency, and to provide evidence of the predictive power of these new metrics; a sketch of one such indicator follows this list.
- Standards and common verification systems will offer credibility as emerging technologies scale. Without standards, a fragmented approach to safety may prove detrimental to the industry. Companies that take proactive approaches to shaping and complying with standards can reduce risks and build a competitive advantage.
- Governments will take a proactive role to both regulate and accelerate. Governments function both as regulators and as catalysts for R&D, raising safety concerns while also accelerating the development of strategies and enabling technologies for safer AI and robotic systems.
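As a minimal sketch of what such a predictive indicator could look like, the code below counts incursions into a lateral “safety envelope” from logged lane-centre offsets; the envelope width, margin and sample data are invented for illustration.

```python
# Sketch of a predictive safety performance indicator: count how often
# the vehicle strays into a lateral "safety envelope" margin. Envelope
# width, margin and sample data are hypothetical.
def envelope_incursions(lateral_offsets_m: list[float],
                        lane_half_width_m: float = 1.8,
                        margin_m: float = 0.3) -> int:
    """An incursion is any frame where the lateral offset from lane
    centre enters the margin near the lane boundary, even if no crash
    or lane departure actually occurs; a leading, not lagging, signal."""
    threshold = lane_half_width_m - margin_m
    return sum(1 for x in lateral_offsets_m if abs(x) > threshold)

drive_log = [0.1, 0.2, 1.6, 0.4, -1.7, 0.0]   # metres from lane centre
print(f"incursions: {envelope_incursions(drive_log)} of {len(drive_log)} frames")
```

Unlike a crash count, an indicator like this produces a signal on every drive, which is what gives it predictive rather than purely retrospective value.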
This article originally appeared in MIT Tech Review.
Summary
The complexity of autonomous applications and the safety demands of their operational environments require trust in these systems before they can be deployed. Collaboration among commercial organizations across several domains to generate new standards and best practices for the validation and verification of autonomous systems may ultimately pave the way for trust and widespread deployment.