January 22, 2025
AI Snake Oil
Arvind Narayanan
Professor of Computer Science at Princeton University and Director of its Center for Information Technology Policy
Minutes of the 15th Meeting of the 83rd Year
President George Bustin called the meeting of the 83rd year of Old Guard to order. Julie Elward-Berry read the minutes from the previous meeting. The following guests (and their hosts) were introduced: Paul Gerard (Irv Urken); Terese Rosenthal (Cynthia Maltenfort); Steve Lin (Lee Gladden);
Carol Anderson (Ann Damsgaard); Hunt Stockwell (Russ White).
There was a moment of silence in memoriam for William A. Sweeney, a 10-year member who died this month. 130 Old Guard members and guests attended the meeting.
Micky Weyeneth introduced Arvind Narayanan, Professor of Computer Science at Princeton University and Director of its Center for Information Technology Policy.
Narayanan led the Princeton Web Transparency and Accountability Project to uncover how companies collect and use the personal information of the general public. His work was among the first to show how machine learning reflects cultural stereotypes, and his doctoral research showed the fundamental limits of de-identification. Narayanan was recognized in the inaugural TIME 100 AI list for his newsletter. He is also a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE). He is the author of the recently published book, "AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference."
Professor Narayanan started the talk by saying he was optimistic about Artificial Intelligence in the long run, but we need a balanced approach to it. He proposed the following as a guiding principle with respect to AI, “Just because we can, doesn’t mean we should.”
But what is AI? As conventionally used today, AI is an umbrella term for loosely related technologies that businesses often over-market as "AI."
There are four distinct types of AI:
I. Predictive: AI which makes decisions about people based on predictions about the future. For example, will someone pay back a loan or contract cancer?
II. Generative: AI that generates text, pictures, or other media. This form has most captivated the general public. Think of AI as writing an article or producing a false image.
III. Social Media Algorithms: This is AI that recommends content or enforces policies – for example, deciding what is acceptable or not on a website. It shapes what users see and exerts a collective force on societies.
IV. Robotics: This is AI that controls physical systems such as drones or self-driving cars.
Professor Narayanan suggested that the use of technological terms and concepts evolves. Today, people tend to think of AI as whatever has not been done yet. However, experience has shown that what was once cutting-edge technology later becomes commonly accepted. He cited web searches and automatic pilots in planes as examples of innovations that have become routine parts of our daily life. How will society regard self-driving cars in twenty years?
It is hard to predict the future, and Predictive AI doesn't change that. Research has shown that AI predictions are only a few percentage points better than more commonly accepted metrics. Machine bias is a problem because AI predictions are based on past data. The chance that someone might become a criminal tomorrow is estimated from crime statistics of the past. As society changes, the use of past data becomes questionable. Predictive AI can be harmful both when it works and when it doesn't. Face recognition is an example.
Generative AI has become indispensable, but its capabilities and dangers have been exaggerated. It is a "rethinking tool" that produces output based on past data. It might generate a quote based on what someone might have said rather than what they actually said. This is why AI may generate a false narrative.
Although AI can pass professional exams such as the bar, passing them does not measure the real-world skills needed to perform the job. Access to a database of knowledge is only one tool for success.
AI has tremendous potential in medicine. It can read images accurately and amazingly fast. However, it needs to be married with human input so that doctors can manage the risk.
Another problem with AI is its uneven distribution of costs and benefits. He cited the case of OpenAI paying Kenyan workers $2 an hour to develop inputs into the system while a few profit hugely. This is a social problem.
In conclusion, Professor Narayanan said that no one can predict how AI will work in the future. The key to its success will be the extent to which we can adapt our institutions to use AI responsibly.
Respectfully submitted,
James Hockenberry