Wisdom is at risk of getting lost in the noise of artificial intelligence (AI). It’s too early to commingle the two. Remember wisdom? It’s our individual, organically head-grown, genuine, subjective slice of intelligence, born of knowledge and experience, that leads us to insight, imagination, judgment, discretion, and compassion. Unlike calculable or measurable facts, your wisdom differs from mine and ours from others’. This is what makes wisdom perhaps both the highest form of intelligence and the most difficult to replicate by artificial means.
This article is not cautionary in nature – you can expect no conspiracy theories or doomsday hysterics here. I leave those concerns and discussions to others. There is, rather, a decidedly more optimistic school of thought. Eliza Kosoy, a researcher in MIT’s Center for Brains, Minds, and Machines, believes, “With enough data and the correct machine learning, machines can make life more enjoyable for humans.”1
Does AI exist yet? To a small extent, yes. AI is here, but there is much more to come. A Google search for “types of AI” reveals disagreement over how many there are and how each is defined. It’s a fluid area of study that is expanding rapidly. Arguably, one of the more interesting possibilities is known as artificial general intelligence (AGI). AGI “is the ability of an AI agent to learn, perceive, understand, and function completely like a human being.”2 Current publications remind us that “no true AGI systems exist; they remain the stuff of science fiction.”3 Regarding when legitimate AGI may make its debut, Oxford professor Max Roser, director of Our World in Data, concludes, “There is large agreement in the overall picture. The timelines of many experts are shorter than a century, and many have timelines that are substantially shorter than that.”4
If not AGI, then what type of AI does exist now, and how long has it been here? Although the recent popularity of ChatGPT may make AI seem like a new conversation, some may be surprised to learn that it was formally conceptualized almost 70 years ago. In 1955, a group of Dartmouth researchers anticipated artificial intelligence with the conjecture that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”5 ChatGPT is a generative form of AI and an example of a broader type called artificial narrow intelligence (ANI). ANI is less sophisticated than the AGI mentioned above. In reality, “This type of artificial intelligence (ANI) represents all the existing AI, including even the most complicated and capable AI that has ever been created to date.”2 Clearly the suggestion is that further developments will be plentiful and substantial.
Although AI is in its infancy, I am confident it will positively affect all homecare operating software, including ours at SwyftOps, where AI already has an incrementally increasing foothold. Agency workflows, and the experiences of those who provide care and those who receive it, will advance correspondingly, improving efficiency and quality for us all.
But to what extent, and how soon, will AI advantages influence our industry? Perhaps not as quickly as we might be led to believe. Some have been too quick to apply the AI label to their efforts, whether out of innocent unawareness or as an intentional ploy to exploit the marketing sparkle of the latest “new and improved” gimmick. Stephen Khan, executive editor of The Conversation International, explains, “decision-making systems, automation, and statistics are not AI. To qualify as AI, a system must exhibit some level of learning and adapting.” So, while the results of sequential, circumstantial, or problem-solving algorithms may be useful and indeed remarkable, such processes are not necessarily AI. Contemporary software capabilities such as pairing caregivers with clients based on multiple attributes (“matching”) or projecting behavior probabilities from historical data (“predictive scheduling”) are therefore likely not AI at all. In fact, most competent homecare operating systems have had these features in place for years.
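To see why Khan’s distinction matters, consider a simplified, hypothetical sketch of the kind of “matching” routine homecare software has offered for years (the attribute names and functions below are invented for illustration, not any vendor’s actual code). The rules are fixed and fully spelled out in advance: nothing in it learns or adapts from outcomes, which is precisely why it is clever automation rather than AI.

```python
# Hypothetical rule-based caregiver matching: a fixed scoring rule, no learning.

def match_score(caregiver_attrs: set, client_needs: set) -> int:
    """Count how many of the client's needs the caregiver satisfies."""
    return len(caregiver_attrs & client_needs)

def best_match(caregivers: dict, client_needs: set) -> str:
    """Return the caregiver whose attributes overlap most with the client's needs."""
    return max(caregivers, key=lambda name: match_score(caregivers[name], client_needs))

caregivers = {
    "Ana": {"dementia care", "driver", "spanish"},
    "Ben": {"driver", "overnight"},
}
print(best_match(caregivers, {"dementia care", "driver"}))  # → Ana
```

Run the same inputs through this function a thousand times and it returns the same answer a thousand times; by Khan’s definition, a system only crosses into AI when its behavior changes as it learns from data.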
After evaluating the present and future operational capabilities of ChatGPT and its alternatives, SwyftOps has chosen Azure OpenAI as its generative conversational AI tool. Among its advantages, unlike ChatGPT, it is HIPAA compliant.6 A current iteration of this technology within SwyftOps is the ability to refine clients’ answers on social history questionnaires into easy-to-read summaries that help caregivers and clients become better acquainted.
Homecare professionals will benefit from ensuring that their core operational system provider is truly AI-aware. AI may be just getting started, but it has progressed far enough that a passive stance qualifies as reckless. Meanwhile, it’s important to understand that there is a responsive aspect to AI preparedness: much has yet to be revealed. One can map the routes and prepare the railroad beds but cannot fully lay a network of track for a train of unknown gauge. The AI winners in homecare will be those who adopt a measured, moderately aggressive approach – those who are purposely positioned and ready to deploy emerging AI breakthroughs as they become commercially viable.
We and our competitors have yet to exhaust the technical opportunities of automation, to say nothing of AI. Our journey is just beginning, and where it will take us is new territory. But perhaps we can begin to imagine some of that territory if we try – if we rely on real intelligence, genuine knowledge and experience – that is, if we allow our wisdom to inform us.
1 Carolyn Blais (MIT School of Engineering) “When will AI be smart enough to outsmart people?”
2 Naveen Joshi (Forbes) “7 Types of Artificial Intelligence” (June 2019) annotation added
3 Cameron Hashemi-Pour and Ben Lutkevich (TechTarget) “artificial general intelligence (AGI)” (November 2023)
4 Max Roser (Professor of Practice in Global Data Analytics at the University of Oxford) “AI timelines: What do experts in artificial intelligence expect for the future?” (February 2023)
5 J. McCarthy (Dartmouth), M. L. Minsky (Harvard), N. Rochester (IBM), C. E. Shannon (Bell Labs) “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence” (August 1955)
jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
6 Steve Alder (HIPAA Journal), “Is ChatGPT HIPAA Compliant?” (December 2023)
www.hipaajournal.com/is-chatgpt-hipaa-compliant/
© 2024 Aegle Technologies LLC dba SwyftOps