Artificial Intelligence in the U.P.: Tool, Threat, or Paradigm Shift?
What is the impact of “Artificial Intelligence” (AI) on the Upper Peninsula? AI per se has been getting a fair amount of apocalyptic media coverage, and for non-technical readers, gaining an understanding of AI–or even a context for what is being reported–can be confusing at best.
I like to keep things simple, so I usually offer my opinion only after I'm confident any definitions, terminology, fundamental building block ideas, etc. are mutually understood, or at least identified. If there are different views of the basics, it is guaranteed there will be misunderstandings of anything further. So–thank you, dear reader, for humoring me!
Let me start the answer with a highly abbreviated discussion of what AI is–and isn't. And please note, this discussion is my opinion, not a scientific treatise.
Homo sapiens are literally "tool makers." Our entire history is defined by discoveries that led to inventions, and inventions that led to tool improvements. Once we discovered optics, we built lenses. Once we started playing with lenses we built microscopes and telescopes. We improved these tools to discover bacteria, germs, solar systems, galaxies, and the Universe.
With each new discovery, some of us focused on improving the tools, while others focused on what we discovered by using the tools. Some innovations took longer than others to implement, based upon the capabilities of the tools and the prevailing way users adopted them. Of course, the most important part of the equation–the tool users–had an impact on how quickly the discovery/innovation/tool improvement cycle moved, and how quickly the results of the discoveries and innovation could be shared with the rest of us. Human intelligence turns out to be–and remains–an indispensable element of the process.
I will leave the study of intelligence–human or otherwise–as a topic for the reader. I don't offer opinions on any author or source. I will say that I believe "AI" is a flawed name for the technology we discuss today. AI–today–isn't sentient; has no intuition; has no instinct; lacks the capability to self-direct; and lacks the full range of sensory inputs, or any other means of interacting with the world, with anywhere near the capacity of a human being.
It isn’t curious unless humans design it to be so. It doesn’t independently initiate unless humans design it to do so. AI is a technology–a tool–that creates the illusion of intelligence based upon human-defined and provided data.
What do I mean by this? As an example, let's explore natural language processing–one of the quintessential "outputs" of the human computer. We use words to communicate. For English, there is an alphabet made up of 26 letters, plus a host of symbols and other characters that help us translate spoken language into written form and vice versa.
It has rules–vowels, consonants, sentence structures, cases, etc.–all clearly defined and well understood. For a computer, it would be very easy to evaluate text and determine how many times the letter "e" showed up in a specific writing sample. It would be equally easy to determine how many times the word "the" showed up, and so on. This could be done for any text. It would be equally easy for it to encode these words as numbers–e.g., "yes" = 1, "no" = 2–and even sentences as numbers–e.g., "Hello my name is"–to compare different writing samples and see how frequently the answers are the same or differ.
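To make the idea concrete, here is a minimal sketch in Python of that kind of counting and encoding. The sample text and the number assignments are invented purely for illustration; a real system works the same way, just over vastly more text.

```python
from collections import Counter

# A made-up writing sample, purely for illustration.
text = "Hello my name is Ada. The lake is cold. The lake is beautiful."
words = text.lower().replace(".", "").split()

# How many times does the letter "e" appear?
e_count = text.lower().count("e")

# How many times does each word appear?
word_counts = Counter(words)

# Encode each distinct word as a number (one word = 1, another = 2, and so on).
encoding = {word: i for i, word in enumerate(sorted(set(words)), start=1)}
encoded_text = [encoding[w] for w in words]

print(e_count)             # count of the letter "e"
print(word_counts["the"])  # count of the word "the"
print(encoded_text)        # the sample rewritten as a list of numbers
```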
For a single person's writing, the computer might "discover" that the text (it would have no idea about the author unless we assigned one) used 2,000 different words, had 570 different sentences, etc. Each time it saw the pattern "Hello my name is" it could expect the next word to be a name 99% of the time–and that it needed to provide a "name" in response to that pattern. If it saw a word that didn't fit the rule, it would have pre-programmed instructions about what to do next.
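Here is an equally simplified sketch of that next-word "prediction," assuming a handful of invented sample sentences. The program does nothing more than tally which word follows the pattern "hello my name is" and then offer the most frequent continuation it has seen:

```python
from collections import Counter

# Invented sample sentences; a real system would evaluate far more text.
samples = [
    "hello my name is anna",
    "hello my name is bob",
    "hello my name is anna",
]

pattern = ("hello", "my", "name", "is")
followers = Counter()

# Tally which word follows the pattern in each sample.
for sentence in samples:
    tokens = sentence.split()
    for i in range(len(tokens) - len(pattern)):
        if tuple(tokens[i:i + len(pattern)]) == pattern:
            followers[tokens[i + len(pattern)]] += 1

# The "prediction" is simply the most frequent continuation seen so far.
predicted, count = followers.most_common(1)[0]
print(predicted, count / sum(followers.values()))  # prints "anna" and roughly 0.67
```

There is no understanding here at all, just counting and a lookup; yet with enough data the responses can look remarkably fluent.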
So, for that particular text, it is said that it “learned” what the text was trying to communicate, and based upon what it “learned,” it could respond as the user expected.
The AI's predictions–in this case–could be improved by evaluating as much text as possible, both to improve the accuracy of its predictions–i.e., the patterns it recognizes as common or uncommon–and to capture exceptions or unexpected patterns–e.g., dialects, misstatements, etc. The larger the amount of data, the more accurate the statistics and pattern recognition.
And there is much redundancy. For American English, we typically use 20,000-30,000 words of common vocabulary, out of roughly 170,000 words in the language. Those volumes aren't hard for computers to track, and the size of the natural language models in use today–measured in parameters, the values assigned to text for evaluation/prediction purposes–ranges from 35 million to 550 billion or more.
When you contemplate 5 billion+ human beings providing text for the models to evaluate through the Internet every day, the volumes of data–and the pattern recognition–become incredibly precise. So much so that, to a single user interacting with the tool, the speed of the responses makes it appear as though something more complicated than pattern recognition and predicted response–actual "thought"–is taking place: the "illusion of intelligence."
Of course, while this example explains the basics of "how" AI works, the computer has no ability to determine "what" is actually correct. The "AI" by itself doesn't know or define what is correct; it only responds based upon the human-provided programming. All of the words we use are based upon human-input sources and definitions. If a human programs the computer to believe the sky is green, it will respond that the sky is green–barring some other human-provided program that changes that data, either based upon statistically significant responses or some other programming designed to detect and change the answer to what is defined as "correct."
As one could imagine, programs can be designed that allow the computer to "self-learn" based upon the number of interactions that agree with–or contradict–its basic programming. If the original definition of the sky is "green," and 5 billion answers–or 99% of the input it receives–define the sky as "blue," the programming will be revised to update the sky to "blue." The more sophisticated models can do this very fast–they are programmed to be adaptive–and so can quickly start to create the illusion of intelligence based upon the sheer volume of interactions with data.
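A toy sketch of that kind of "self-correction" might look like the following. The 95% threshold and the simulated inputs are invented for illustration; real systems use far more elaborate statistics, but the principle–overwrite the stored answer when the overwhelming majority of inputs disagree with it–is the same:

```python
from collections import Counter

# The program starts out "believing" a human-provided fact.
facts = {"sky": "green"}

# Simulated user inputs: 99 say "blue," 1 agrees with "green."
observations = ["blue"] * 99 + ["green"]

# Hypothetical rule: if a different answer dominates (say, more than 95% of
# inputs), replace the stored fact with the majority answer.
counts = Counter(observations)
majority, votes = counts.most_common(1)[0]
if majority != facts["sky"] and votes / len(observations) > 0.95:
    facts["sky"] = majority

print(facts["sky"])  # prints "blue"
```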
Of course, such a tool is very powerful; it is a tool applied to one of the core functions of the “human machine.” As the media has stated, it could also be very dangerous for people who don’t understand how it works, or how it could be manipulated.
Since humans provide the definitions and sources, it is easy to imagine that anything with a human bias would create a biased result. As an extreme example, imagine if somebody decided to use a data source that included all of Adolf Hitler's speeches and written texts. Predictably, the underlying language model would use Hitler's words and patterns as the basis for its responses. If the user didn't know the AI had been "trained" using Hitler as the source, they might be in for a nasty surprise!
There are other challenges with the way the models are trained that introduce implicit bias as well. People rarely state the obvious in either writing or speech, and thus many trivial facts ("people breathe") are rarely mentioned in text, while uncommon events ("people murder") are reported disproportionately often. AI trained on all of the world's news might therefore respond in a manner that suggests murder is more common than breathing! And–since the technology underlying the AI doesn't experience these trivial activities common to organic existence–it doesn't "learn" a context for how to include them when formulating responses.
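A trivial sketch shows why. Using an invented "news corpus" of a few headlines, simple word counting makes "murder" look common and "breathe" look nonexistent, the opposite of real life, because the counts reflect what gets written about, not what actually happens:

```python
from collections import Counter

# A tiny, made-up "news corpus": unusual events get written about,
# everyday facts ("people breathe") almost never do.
headlines = [
    "murder suspect arrested downtown",
    "police investigate murder case",
    "jury hears murder trial testimony",
    "local bakery wins award",
]

counts = Counter(" ".join(headlines).split())
print(counts["murder"])   # prints 3
print(counts["breathe"])  # prints 0
```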
And–of course–the user part of this story bears some note as well; after all, that is where the impact on the U.P. will be the most visible. It's probably safe to say that most people don't know how AI actually works–or how to recognize or evaluate AI-based responses. Most of the population can't recognize "deep fakes"–i.e., fake video or audio recordings, images, etc.–when they are generated by human beings, let alone when they are generated by AI.
The victimization of people in the U.P. by cybersecurity and ransomware attacks is strong evidence that people aren't completely savvy even about Internet technologies other than AI, or about how those technologies can be exploited for other than neutral purposes. So the chances that uninformed people can be manipulated by an advanced tool such as AI are extremely high. In my view, that is the "threat" of AI, and the challenge we must address if we hope to harness the full value of the tool we've created.
So, when can we expect to see how AI might impact the U.P., and what might that look like? As with all technologies, there are some early adopters, but the majority of users typically jump in once the bugs are worked out. The government took five years to warn everybody that analog television was going away and that all TVs would need to be digital. The time came and went, and a market sprang up for digital-to-analog converters for people who wanted to keep their old TVs.
The Internet was established in the 1960s, but it wasn't opened up for public use until 1993, and the World Wide Web really didn't take off until 1994. Discovery, innovation, and adoption of new tools take time. Modern technology seems to have sped up the pace, but I'd estimate that the majority of the U.P. economy won't see the full impact of AI usage for another 4-7 years from the time of this writing.
From an economic perspective, it will impact most white-collar industries first–i.e., office and administrative functions, or anything that requires content generation or creation. As AI technologies become more common, they will be adapted for more sophisticated purposes–i.e., trained on the specifics of particular businesses or industries–and they will become more affordable for the average U.P. business or user.
Testing laboratories will use them as a core part of analytic processes; manufacturers will use them to evaluate various process inputs, expand the use of robotics, and automate operations beyond the levels seen today. The adoption will be disruptive, as the increased efficiency and reduced risk the tools bring to some hazardous or tedious functions will impact workforce needs and shift some skill bases, causing displacement and job loss. The impacts could be severe and could exacerbate existing issues.
I stay connected to policy makers at every level of government and industry. I will forgo mentioning any specific examples out of courtesy to the anxious deliberations I know are underway. I have many ideas about how to mitigate the risks of these new tools, and I am glad to chat with anybody privately about how AI might be harnessed for their purposes.
Suffice to say, AI is already having an impact on the U.P. It is already here. It is not a mysterious entity to be feared, but simply another tool we human beings have created, and it will be innovatively applied to almost everything we do in the U.P.
Bravo! Thanks, Bill, for your detailed explanation of AI and how AI will affect the U.P.!
Thanks for the detailed, rational description. The unfortunate current naming convention will hopefully go away soon. As you've expertly written, there is nothing intelligent going on with AI.
Here is a current favorite AI-generated example:
If it takes 3 towels three hours to dry on a clothesline, how long will it take 9 towels to dry on a clothesline?
AI answer: 9 hours. 3 times as many towels will take 3 times as much time.
Also, unfortunately, that simple example will still confuse people (the towels dry in parallel, so the correct answer is still three hours).
Many thanks for this in-depth explanation.
Good job Bill!! I did not have any real insight into AI–just the old "machines will take over the world" gossip.
Thank you