When, Where and How AI Should Be Applied

Phil Koopman dissects the strengths and weaknesses of machine learning-based AI

AI does amazing stuff. No question about it.

But how hard have we really thought about “machine-learning capabilities” for applications?

Phil Koopman, professor at Carnegie Mellon University, delivered a keynote on Sept. 11, 2024, at the Business of Semiconductor Summit (BOSS 2024), concentrating on big-picture AI capabilities.

Instead of parsing AI’s underlying mechanisms and models, Koopman intentionally pitched his talk at a very high level.

Why?

Because AI is no panacea.

For anyone developing a new AI-infused product (and these days, that’s most likely you), it’s crucial to remain vigilant about the strengths and weaknesses of machine learning.

Machine learning-based AI can obviously perform well when applied to “common case” products. But even a common product can generate outcomes nobody foresaw or imagined. Remember that “good enough” performance can be bad enough, particularly in safety-critical products.

In his presentation, Koopman stacked up examples of the good, the bad and the ugly in AI applications. Depending on when, how and where machine learning is applied, the results can range from amazing to devastating.

In this era, AI is worshiped as the universal problem-solver. Even when machine learning-based AI fails to produce accurate answers, we often let it pass, trusting that AI will “learn” over time.

Here’s the fly in the ointment.

As we apply human behavioral terms to machine learning capabilities, saying that AI is “smart,” has “bias,” continues to “learn,” and even “hallucinates,” we tend to forget that machine learning-based AI is built on statistics. Koopman stressed the fallacy of our “projecting ‘truth’ and awareness into AI.”

AI, in fact, is not our friend. It certainly isn’t human.

As Koopman made clear, “Machine Learning-based AI has no self-awareness.” AI doesn’t even “understand” what it is doing.

Koopman’s talk bluntly brings AI’s capabilities, too often left unquestioned, down to earth. He said that AI’s “hallucinating” should be called, more accurately, “bullshitting.”

Junko Yoshida
Editor in Chief, The Ojo-Yoshida Report


This article was published by The Ojo-Yoshida Report. For more in-depth analysis, register today and get a free two-month all-access subscription.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
