Those are some of the observations drawn from a recently released report from experts convened by Stanford University, the newest installment of the One Hundred Year Study on Artificial Intelligence (AI100), an exceptionally long-term effort to track and monitor AI as it progresses over the coming century. The AI100 standing committee is led by Peter Stone, a professor of computer science at The University of Texas at Austin and executive director of Sony AI America; the study panel that authored the report was chaired by Michael Littman, a professor of computer science at Brown University.

The AI100 authors urge that AI be employed as a tool to augment and amplify human skills. “All stakeholders need to be involved in the design of AI assistants to produce a human-AI team that outperforms either alone. Human users must understand the AI system and its limitations to trust and use it appropriately, and AI system designers must understand the context in which the system will be used.”

AI has the greatest potential when it augments human capabilities, and this is where it can be most productive, the report’s authors argue. “Whether it’s finding patterns in chemical interactions that lead to a new drug discovery or helping public defenders identify the most appropriate strategies to pursue, there are many ways in which AI can augment the capabilities of people. An AI system might be better at synthesizing available data and making decisions in well-characterized parts of a problem, while a human may be better at understanding the implications of the data – say, if missing data fields are actually a signal for important, unmeasured information for some subgroup represented in the data – working with difficult-to-fully-quantify objectives, and identifying creative actions beyond what the AI may be programmed to consider.”

Complete autonomy “is not the eventual goal for AI systems,” the co-authors state.
There need to be “clear lines of communication between human and automated decision makers. At the end of the day, the success of the field will be measured by how it has empowered all people, not by how efficiently machines devalue the very people we are trying to help.”

The report examines key areas where AI is developing and making a difference in work and lives:

Discovery: “New developments in interpretable AI and visualization of AI are making it much easier for humans to inspect AI programs more deeply and use them to explicitly organize information in a way that facilitates a human expert putting the pieces together and drawing insights,” the report notes.

Decision-making: AI helps summarize data too complex for a person to easily absorb. “Summarization is now being used or actively considered in fields where large amounts of text must be read and analyzed – whether it is following news media, doing financial research, conducting search engine optimization, or analyzing contracts, patents, or legal documents. Nascent progress in highly realistic (but currently not reliable or accurate) text generation, such as GPT-3, may also make these interactions more natural.”

AI as assistant: “We are already starting to see AI programs that can process and translate text from a photograph, allowing travelers to read signage and menus. Improved translation tools will facilitate human interactions across cultures. Projects that once required a person to have highly specialized knowledge or copious amounts of time may become accessible to more people by allowing them to search for task- and context-specific expertise.”

Language processing: Advances in language-processing technology have been supported by neural-network language models, including ELMo, GPT, mT5, and BERT, that “learn about how words are used in context – including elements of grammar, meaning, and basic facts about the world – from sifting through the patterns in naturally occurring text.
These models’ facility with language is already supporting applications such as machine translation, text classification, speech recognition, writing aids, and chatbots. Future applications could include improving human-AI interactions across diverse languages and situations.”

Computer vision and image processing: “Many image-processing approaches use deep learning for recognition, classification, conversion, and other tasks. Training time for image processing has been substantially reduced. Programs running on ImageNet, a massive standardized collection of over 14 million photographs used to train and test visual identification programs, complete their work 100 times faster than just three years ago.” The report’s authors caution, however, that such technology could be subject to abuse.

Robotics: “The last five years have seen consistent progress in intelligent robotics driven by machine learning, powerful computing and communication capabilities, and increased availability of sophisticated sensor systems. Although these systems are not fully able to take advantage of all the advances in AI, primarily due to the physical constraints of the environments, highly agile and dynamic robotics systems are now available for home and industrial use.”

Mobility: “The optimistic predictions from five years ago of rapid progress in fully autonomous driving have failed to materialize. The reasons may be complicated, but the need for exceptional levels of safety in complex physical environments makes the problem more challenging, and more expensive, to solve than had been anticipated. The design of self-driving cars requires integration of a range of technologies including sensor fusion, AI planning and decision-making, vehicle dynamics prediction, on-the-fly rerouting, inter-vehicle communication, and more.”

Recommender systems: The AI technologies powering recommender systems have changed considerably in the past five years, the report states.
“One shift is the near-universal incorporation of deep neural networks to better predict user responses to recommendations. There has also been increased usage of sophisticated machine-learning techniques for analyzing the content of recommended items, rather than using only metadata and user click or consumption behavior.”

The report’s authors caution that “the use of ever-more-sophisticated machine-learned models for recommending products, services, and content has raised significant concerns about the issues of fairness, diversity, polarization, and the emergence of filter bubbles […]. While these problems require more than just technical solutions, increasing attention is paid to technologies that can at least partly address such issues.”
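The shift the report describes – predicting a user’s response with a learned model that also looks at item content, not just metadata or click behavior – can be illustrated with a minimal sketch. This is not code from the report: the users, items, content feature, dimensions, and learning rate below are all invented for illustration. The sketch trains a toy logistic model whose score combines learned user and item embeddings with a simple item-content feature:

```python
# Toy sketch (not from the AI100 report): a minimal recommender that learns
# to predict user responses from user/item embeddings plus an item-content
# feature, rather than from metadata or clicks alone. All data is invented.
import math
import random

random.seed(0)
DIM = 4    # embedding size (illustrative)
LR = 0.5   # learning rate (illustrative)

users = ["u1", "u2"]
items = ["i1", "i2", "i3"]
# Hypothetical content feature per item (e.g., fraction of "news" content).
content = {"i1": 0.9, "i2": 0.1, "i3": 0.8}
# Observed responses: 1.0 = clicked/consumed, 0.0 = ignored.
interactions = [("u1", "i1", 1.0), ("u1", "i2", 0.0),
                ("u2", "i2", 1.0), ("u2", "i1", 0.0)]

# Learned parameters: one embedding per user and item, plus a per-user
# weight on the content feature.
U = {u: [random.uniform(-0.1, 0.1) for _ in range(DIM)] for u in users}
V = {i: [random.uniform(-0.1, 0.1) for _ in range(DIM)] for i in items}
w_content = {u: 0.0 for u in users}

def predict(user, item):
    """Probability that `user` responds positively to `item`."""
    score = sum(a * b for a, b in zip(U[user], V[item]))
    score += w_content[user] * content[item]
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid

# Plain stochastic gradient descent on the logistic (log) loss.
for _ in range(2000):
    user, item, y = random.choice(interactions)
    g = predict(user, item) - y  # gradient of log loss w.r.t. the score
    grad_u = [g * v for v in V[item]]
    grad_v = [g * u for u in U[user]]
    for d in range(DIM):
        U[user][d] -= LR * grad_u[d]
        V[item][d] -= LR * grad_v[d]
    w_content[user] -= LR * g * content[item]
```

After training, the model separates each user’s clicked and ignored items, and the per-user content weights learn opposite preferences for the content feature – the basic ingredient of the personalization, and at scale the filter-bubble risk, that the report’s authors flag.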