Types of Artificial Intelligence, Explained

Article
Dental Products Report, December 2022
Volume 56, Issue 12

No longer just science fiction, artificial intelligence is an evolving technology that is making its way into the dental practice.

©Zoa-Arts / stock.adobe.com

"I’m sorry, Dave, I’m afraid I can’t do that,” chimed HAL, the artificial intelligence (AI)-powered supercomputer in 2001: A Space Odyssey, ominously. Many people think of this iconic moment when they envision AI technology: A computer has reached sentience to the disadvantage of the humans working with it.

Although our technology is still far from the sentient HAL sabotaging astronauts’ attempt to deactivate it, AI has undeniably infiltrated our daily lives and almost every industry and field, including dentistry. Luckily, our CBCT machines and practice management software have yet to revolt against us, instead integrating AI for valuable diagnostic, treatment planning, and practice management support.

AI, the ability of a program to mirror human problem-solving through pattern recognition, falls under 2 large umbrellas: advanced performance and limited functionality. Advanced-performance AI can carry out tasks with a humanlike level of proficiency (ultimately, think HAL). We have not reached that apex, but limited-functionality AI is already at our disposal. Although less evolved, it provides invaluable assistance in streamlining and automating tasks.

Within these 2 umbrellas, AI is generally broken down into 4 categories based on its ability to “think”: reactive, limited memory, theory of mind, and self-aware.1 AI can also be grouped into an alternative classification system consisting of artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI).1

Reactive AI

The oldest and most basic form, reactive AI, draws conclusions from patterns in data by using statistical models and algorithms. Reactive machine-learning models analyze huge amounts of data to identify patterns and produce output.

One example of reactive AI is the spam filter. Spam filters analyze text to determine whether the presence and repetition of certain keywords should trigger a move to the spam folder.2 Netflix and other streaming services also employ reactive AI, using previously watched content to power their recommendation algorithms and shape their catalogs of movies and TV shows.3
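To make the keyword mechanism concrete, the sketch below shows a toy scoring filter in Python. The keyword list, weights, and threshold are invented for illustration only; production spam filters learn their rules statistically from enormous volumes of labeled mail rather than relying on a hand-written list.

```python
# Minimal sketch of keyword-based spam scoring, in the spirit of the reactive
# filters described above. Keywords, weights, and threshold are illustrative.
SPAM_KEYWORDS = {"free": 2.0, "winner": 3.0, "urgent": 1.5, "click here": 2.5}

def spam_score(message: str) -> float:
    """Sum the weights of spam keywords found in the message text."""
    text = message.lower()
    return sum(weight * text.count(keyword) for keyword, weight in SPAM_KEYWORDS.items())

def is_spam(message: str, threshold: float = 4.0) -> bool:
    """Flag the message if its keyword score crosses the threshold."""
    return spam_score(message) >= threshold

print(is_spam("URGENT: You are a winner! Click here for your free prize"))  # True
print(is_spam("Your hygiene appointment is confirmed for Tuesday at 9 AM"))  # False
```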

Another example of reactive AI is IBM’s Deep Blue, the chess-playing computer that defeated world champion Garry Kasparov in 1997.4 During the match, the machine could identify the pieces on the board, knew how each one moved, could predict its opponent’s next moves, and could choose the best move from a range of possibilities.4 However, reactive AI has no memory of what came before. Aside from a chess-specific rule against repeating the same move 3 times, Deep Blue could only assess the board as it existed in the present moment.4

Limited Memory AI

To form more accurate conclusions, a machine needs to be able to learn. Limited memory machines can make the same reactive decisions as reactive AI but can also learn from past input.1 This type of AI stores large volumes of data and experiential knowledge to use as references when solving future problems.1 It filters through that data to make predictions and inferences about what will happen and can evolve to make better decisions going forward.1

Most current applications of AI fall under this category, including self-driving cars, chatbots, and virtual assistants.1 Self-driving cars monitor the environment and the movement of traffic and analyze previous patterns to make decisions on the road. The cars learn from past experiences and behaviors to predict future outcomes, making them fairly reliable (setting aside the nearly 400 crashes involving cars with automated driving technology reported to regulators between July 2021 and May 15, 2022).5
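The following toy sketch, written in Python purely for illustration, captures the core idea of limited memory: the system keeps a bounded window of past observations (here, the observed speeds of a vehicle ahead, an invented example) and bases its next prediction on that stored history rather than on the current moment alone. Real driver-assistance systems use far richer models, but the principle is the same.

```python
from collections import deque

class LimitedMemoryPredictor:
    """Toy illustration of limited memory AI: store a bounded window of past
    observations and predict the next value from their average."""

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)  # oldest observations are discarded

    def observe(self, value: float) -> None:
        self.history.append(value)

    def predict(self) -> float:
        if not self.history:
            return 0.0
        return sum(self.history) / len(self.history)

predictor = LimitedMemoryPredictor(window=3)
for speed in [28, 30, 35, 33]:  # hypothetical observed speeds of a car ahead
    predictor.observe(speed)
print(predictor.predict())  # averages only the 3 most recent observations
```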

Another example of limited memory AI is image recognition. This software views and analyzes hundreds of thousands of images and stores them in its memory, allowing it to learn what different objects are and to identify them in the future. The more images the program views, the more it learns and the faster its accuracy improves.

This type of AI is being used extensively in dentistry, particularly in endodontic and orthodontic treatment planning and caries detection. Dentistry primarily implements 2 kinds of AI models: artificial neural networks (ANNs), which are modeled on the neural networks of the brain and learn to recognize patterns in data, and convolutional neural networks (CNNs), a specialized type of ANN that analyzes visual images, which is particularly helpful in diagnostics and in evaluating dental radiographs.
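As a rough illustration of what a CNN for radiograph analysis looks like structurally, the sketch below defines a tiny image classifier in Python using the PyTorch library. Everything about it is an assumption made for the example: the layer sizes, the 64 x 64 grayscale input patch, and the single “finding versus no finding” output do not reflect the architecture of any commercial dental AI product, and a real system would be trained on many thousands of labeled radiographs before its output meant anything.

```python
import torch
import torch.nn as nn

class TinyRadiographCNN(nn.Module):
    """Minimal CNN sketch: grayscale radiograph patch in, probability of a
    finding (e.g., caries present) out. Sizes are illustrative only."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # learn local edge/texture filters
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 64x64 -> 32x32
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # learn higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16, 1),                  # single logit: finding vs. no finding
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = TinyRadiographCNN()
patch = torch.randn(1, 1, 64, 64)                 # one synthetic 64x64 grayscale patch
probability = torch.sigmoid(model(patch)).item()  # meaningful only after training
print(f"Probability of finding: {probability:.2f}")
```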

By viewing thousands of past scans and images, AI-powered software can identify caries with increasing accuracy, and the industry is finding these systems impressively effective. In a 2020 study, AI-powered software examined images from 109 patients containing 153 periapical lesions and detected 142 of them, a detection rate of roughly 92.8% (142 ÷ 153 ≈ 0.928).6 Other research has produced similar results.

“If you look at our clinical trial results, the technology is able to provably surface an average of 37% more disease than human practitioners are able to identify, which is a massive amount if you think about it,” says Ophir Tanz, founder and CEO of Pearl. “And it’s not surprising, because if you look at a radiograph and that radiograph has maybe some interproximal caries, a periapical lesion, and calculus—these are really easy to miss a lot of the time because they’re small and they’re faint. But it’s important to catch this disease early, so you can issue the best care, and the platforms are helping with that.”

Pearl’s Second Opinion® dental AI platform automatically detects a range of conditions, including caries, calculus, margin discrepancies, and periapical lesions, in dental X-rays for patients 12 and older. Pearl designed the technology to serve as a second set of “eyes” for the dentist; it does not replace the dentist’s expertise but can assist in identifying problem areas. Tanz believes this technology can support dentists in providing better care.

“Our clinical trials have shown that computers are able to identify pathology in a way that is superior to humans,” Tanz says. “It shouldn’t be so surprising, since you have machines that are able to beat every conceivable human at chess.”

Eric Giesecke, chief executive officer of Planet DDS, has also seen the benefits of AI-supported care and expects the use of AI in dentistry to keep growing across the clinical and practice-management sides, thanks to its many benefits.

“We hear from the market that dentists are most excited about the caries detection use case for AI,” he says. “As practices are grappling with hiring shortages, inflation, and changing patient expectations, practices that implement AI for caries detection will help patients feel more confident and informed about their diagnosis and treatment options. With the support of AI, dentists can increase case acceptance and even accelerate claims processing. AI also improves the experience for dentists by allowing them to leverage machine learning to deliver better patient outcomes faster.”

All existing AI (reactive and limited memory) falls under the classification of ANI.1 ANI is an AI system that is limited to performing autonomous tasks with humanlike capabilities but cannot exceed its programming (giving it a “narrow” range of abilities).1 These types of AI may be able to do a task faster or better than humans, but they can only do the tasks for which they are designed.1 Even our most advanced AI today, such as deep learning and machine learning (or technology that can grow and learn), is considered ANI.

Theory-of-Mind AI

Although limited memory and reactive AI are in use, theory-of-mind AI is under development and exists only conceptually. The goal of theory-of-mind AI is for the technology to understand the intents (such as emotions, beliefs, thought processes, needs, and goals) of the individuals it is interacting with.1 The term theory of mind, co-opted from psychology, refers to the understanding that people’s feelings, thoughts, and beliefs shape their behavior. To reach this level, AI systems would need to grasp those internal states and adjust their responses accordingly.

“Understanding” is the primary roadblock to the development of theory-of-mind AI; although today’s AI can identify periapical lesions, it does not understand what it has identified or why it matters. Theory-of-mind AI would grasp the meaning and motives behind what it detects and, because of that context, could learn from far fewer examples. This milestone has not been reached, but it could have far-reaching implications.

Self-Aware AI

AI-wary folks can breathe easy: self-aware AI is far down the road. This type of AI, exemplified by the infamous HAL, would be so similar to the human brain that it would be conscious of itself.1 Essentially, such an AI would be just as smart, understanding, and competent as a human, and perhaps more so. Self-aware AI would be able to understand, evoke, and even feel emotion (think Sonny from I, Robot), which could put it at odds with human intentions.1

This future AI (along with theory-of-mind AI) falls under the categories of AGI and ASI, in which systems understand the world as completely as a human does. AGI systems would be able to learn, understand, and respond as a person can. They would form connections across different domains, allowing them to grow and develop multiple areas of competency.1

ASI takes everything from AGI and elevates it to the next level. In addition to replicating the many facets of human intelligence, this AI would simply be better: faster, smarter, and sharper, with greater memory and the ability to analyze data and respond more quickly than any person.1 Tempting as such high-powered machines may be, it may be in humanity’s best interest not to let AI get too smart (at least according to roughly 100 feature films and books).

With AI constantly improving, dentists should be on the lookout for solutions that can benefit their practices and, ultimately, the care they provide. AI will never replace the clinician in the operatory, but from treatment planning to practice management, marketing to diagnostics, and CAD/CAM to CBCT, all areas of dentistry will continue to see AI integration and benefit from the streamlining and assistance it provides. AI’s potential to solve different problems across the field will make the dental industry stronger. Although we may not be ready for our robot overlords, we can certainly embrace the intuitive, predictive technology that supports daily dental practice tasks.

References

  1. Joshi N. 7 types of artificial intelligence. Forbes. June 19, 2019. Accessed October 1, 2022. https://www.forbes.com/sites/cognitiveworld/2019/06/19/7-types-of-artificial-intelligence/?sh=45e12add233e
  2. Dickson B. How machine learning removes spam from your inbox. TechTalks. November 30, 2020. Accessed October 1, 2022. https://bdtechtalks.com/2020/11/30/machine-learning-spam-detection/
  3. Machine learning: learning how to entertain the world. Netflix Research. Accessed October 3, 2022. https://research.netflix.com/research-area/machine-learning
  4. Hintze A. From reactive robots to sentient machines: the 4 types of AI. Live Science. November 14, 2016. Accessed October 3, 2022. https://www.livescience.com/56858-4-types-artificial-intelligence.html
  5. Associated Press. Nearly 400 car crashes in 11 months involved automated tech, companies tell regulators. NPR. June 15, 2022. Accessed October 3, 2022. https://www.npr.org/2022/06/15/1105252793/nearly-400-car-crashes-in-11-months-involved-automated-tech-companies-tell-regul
  6. Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J. 2020;53(5):680-689. doi:10.1111/iej.13265