WHAT IS AI?
The Supervisory Economy™
What is there to Supervise? Artificial Intelligence! So What is AI?
Intro to AI

In 2017, when we started Cowen Sustainable Investments, we posited that using AI to invest in private equity made a lot of sense, especially for identifying which sustainable investments were worthwhile. The models we developed were interesting enough that a handful of the largest institutional investors in the world invested a billion dollars with us to take this approach. Many thought this was ahead of the curve and early. Maybe it was in its application; however, the tools to use AI already existed. In fact, the development of these tools had been going on for decades, and they are essential to the evolution of the world economy to its next stage.
Over the last couple of hundred years, the world's economies have focused first on agriculture, then on manufacturing, then on knowledge and services enabled by technology. I believe that we are in the first inning of a further evolution toward world economies driven by Artificial Intelligence. I believe AI supervision will become the biggest area of job growth in the future. So what exactly is AI?
In the simplest terms, AI is a group of technologies that let machines do things that previously required human thinking. Hence the term artificial intelligence: machines have the ability to do things that we all believe require human intelligence or thinking. Very early in the development of some of the key attributes of AI, developers and scientists made a surprising discovery. The attributes connected to learning and understanding language could be done better and faster by machines. Not a little bit faster and better, but orders of magnitude faster and better! We'll dig into what AI is below, after we review its history and components.
The founders of AI are all giants in the field, and yet few people know their names. The exception is Alan Turing, because he was the subject of a movie, The Imitation Game. Turing conceived of a thinking machine. He proposed the Turing Test to measure a machine's "intelligent behavior." He was one of the first to use algorithms (sets of instructions for solving problems) to create a machine that was the forerunner of the computer. He used his machine for code breaking in WWII.
Lesser known, but equally important in the evolution of computer science toward AI, was John McCarthy. He literally coined the term AI in 1956 at a conference at Dartmouth that most believe was the starting point of AI development. Marvin Minsky and Frank Rosenblatt were also key contributors.
Minsky was one of the first to develop the idea of, and models for, neural networks. Rosenblatt created the perceptron, one of the first neural network models.
Neural Networks, Machine Learning, and Large Language Models
Neural Networks
This begs the question: what is a neural network? A neural network is a computer model that mimics the workings of a human brain. It's used in machine learning. (Don't worry, machine learning will be the next thing we tackle.) How does it work? The network is made up of several layers of connected nodes, just like our brains. The first layer takes in raw data, like we do when we perceive an image or a sound. Next, hidden layers process that data by making connections between observations and extracting the features of the data set. Then an output layer classifies the data or predicts an outcome. We do this as we perceive and decide about things, and then write about it, say something about it, or just think about it.
How does this all happen in a machine? After receiving these inputs, the machine multiplies the value of each input by a weight that has been assigned to it. The weights change as the predictions the machine makes are evaluated. The machine then applies an activation function. On a basic level, the activation function decides whether a node "fires" on the input being evaluated. This allows for nonlinear analysis, which is for now beyond the scope of our work. Suffice it to say that nonlinearity is important for addressing the complex answers required by many questions in the real world.
The neural network next compares the prediction it made to the correct value in the real world and adjusts its weights to try to get a better outcome. This allows the network to LEARN. The more data the machine has to process, and the more computing power and speed it uses to process the outcomes, the better it learns.
After a lot of data is processed and evaluated, the machine can make predictions about outcomes from data it hasn't yet processed. This is inference. Inference allows computer scientists to use these neural networks to create machine learning. So what is machine learning?
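To make the loop above concrete, here is a minimal sketch of a single "neuron" in plain Python. The task (learning logical OR), the learning rate, and the epoch count are all made up for illustration; a real network has many layers of these units. The neuron multiplies inputs by weights, applies a sigmoid activation, compares its prediction to the correct answer, and nudges its weights toward a better outcome.

```python
import math

# A single "neuron": multiply inputs by weights, add a bias, then apply an
# activation function (the sigmoid, which squashes any number into 0..1).
def predict(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Learning loop: compare each prediction to the correct label and adjust
# the weights a little in the direction that reduces the error.
def train(samples, labels, weights, bias, rate=0.5, epochs=2000):
    for _ in range(epochs):
        for inputs, label in zip(samples, labels):
            p = predict(inputs, weights, bias)
            grad = (p - label) * p * (1 - p)  # error times slope of sigmoid
            weights = [w - rate * grad * x for w, x in zip(weights, inputs)]
            bias -= rate * grad
    return weights, bias

# Toy task: learn logical OR from four labeled examples.
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 1, 1, 1]
weights, bias = train(samples, labels, [0.0, 0.0], 0.0)
outputs = [round(predict(s, weights, bias)) for s in samples]
```

After training, the rounded predictions match the labels: the weights were never programmed in; they were learned from feedback.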
Machine Learning
Machine learning is the part of AI that allows computers to make predictions and/or draw conclusions without having been explicitly programmed to do so. As explained above, the computer doesn't need rules to follow per se; it keeps improving its performance by comparing the decisions it makes with feedback about how accurate those decisions were.
The machine learning model is trained on datasets, the bigger the better. The learning process involves minimizing errors through statistical optimization techniques. After the model is trained, it is used on data it hasn't processed, applying the patterns it has learned. There are basically three types of machine learning:
1. Supervised learning – the machine processes data that is labeled.
2. Unsupervised learning – the machine finds patterns in data sets that are not labeled.
3. Reinforcement learning – the machine "learns" by trial and error, and rewards are accumulated for accuracy. The reward signal tells the model how well it has done, leading the model to optimal behaviors as more and more data is processed.
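Supervised learning, the first type above, can be shown in miniature with a toy nearest-neighbor classifier (the pet measurements here are invented for illustration). "Training" is just storing labeled examples; inference reuses the label of the closest stored example for a point the model has never seen.

```python
# Supervised learning in miniature: a 1-nearest-neighbor classifier.
def distance(a, b):
    """Squared distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train_points, train_labels, point):
    """Find the closest labeled example and reuse its label."""
    best = min(range(len(train_points)),
               key=lambda i: distance(train_points[i], point))
    return train_labels[best]

# Hypothetical labeled dataset: [height_cm, weight_kg] -> "cat" or "dog".
points = [[25, 4], [30, 5], [60, 25], [70, 30]]
labels = ["cat", "cat", "dog", "dog"]

# Inference on data the model never saw during "training":
guess = predict(points, labels, [28, 4.5])
```

The new point lands nearest the cat examples, so the model labels it "cat" even though nobody wrote a rule saying what a cat is.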
Large Language Models use machine learning with huge data sets so that you and I can interact with AI. So what are LLMs?
Large Language Models
LLMs are the models AI uses to understand text and to generate text that people understand. They're built using transformer architectures (again, beyond our initial scope) and trained on massive data sets drawn from websites, books, and writing of all types. These text sources are gigantic, and the resulting models often have billions of parameters. LLMs can answer questions, summarize documents, write code, write stories, and lots more. They can pretty accurately predict what the next word should be in a sequence of words. You can interact with them, and they will "remember" what you've previously said, so conversations maintain a high level of context. Like machine learning, LLMs have three main stages in how they work:
1. Input processing – lets LLMs go through text and break it into its main units (tokens).
2. Pattern recognition – allows LLMs to learn relationships between those units.
3. Output generation – produces text results by predicting the most likely series of words based on the input and the training received.
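The three stages above can be sketched with a toy next-word predictor in Python. To be clear, a real LLM uses a transformer with billions of parameters; this sketch, over a made-up corpus, just counts which word tends to follow which, but it walks through the same tokenize-learn-predict pipeline.

```python
from collections import Counter, defaultdict

corpus = ("the cat sat on the mat . the cat sat down . "
          "the dog sat on the rug .")

# 1. Input processing: break the text into its main units (here, words).
tokens = corpus.split()

# 2. Pattern recognition: count which word follows which (bigram counts).
follows = defaultdict(Counter)
for a, b in zip(tokens, tokens[1:]):
    follows[a][b] += 1

# 3. Output generation: predict the most likely next word.
def next_word(word):
    return follows[word].most_common(1)[0][0]

prediction = next_word("cat")
```

In this corpus "cat" is followed by "sat" twice, so `next_word("cat")` returns "sat". An LLM does the same kind of next-word prediction, but over patterns learned from essentially the whole written internet.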
The first LLM that many of us interested in AI encountered was probably ChatGPT.
Now Let's Look at What AI Is
Now that we've looked at what neural networks are, what machine learning is, and what Large Language Models are, it's time to look at what AI is. An interesting approach to this question is to ask some of the main AI bots and models what they think AI is. For our purposes, let's ask Google's Gemini, X's Grok, Microsoft's Copilot, and Meta's AI. The models all pretty much agree that:
· AI makes machines that do things that used to require human intelligence.
· They all cite what we have been discussing as part and parcel of creating artificial intelligence: machine learning (and its powerful subset, deep learning), neural networks, layers of nodes, big data sets, and LLMs.
· They cite the different types of machine learning we’ve discussed including supervised, unsupervised and reinforcement learning.
However, a bunch of other things are also mentioned. Applications like:
· Computer vision and Image recognition
· Robotics and
· Virtual Assistants
come up over and over again. Each of these will be worth looking into in future write ups. However, now that we have a semblance of understanding of what AI is and how it works, let’s jump ahead to what AI will accomplish as it’s deployed, how that will be disruptive and why it will require supervision.
What’s Coming Next
It's already well understood that the first wave of massive disruption in the job market will occur in the white collar space. AI can code, and the CEO of Microsoft has stated that AI is already writing 20-30% of the code on some Microsoft projects. Companies like Salesforce and Workday are already doing layoffs as AI replaces workers. Shopify has mandated that there will be no new hiring unless the manager requesting the hire can document that AI can't do the job they want to fill. One after another, companies are quickly figuring out that AI can do certain jobs currently done by white collar employees faster, cheaper, and better. This will only speed up adoption and disruption. It will not slow down!
Blue collar will be next. Robots are being deployed to do more and more tasks. Robots are being created that look and act like people. This is just starting, and we are not yet at a point where robots that look like people are widely for sale to do manufacturing and/or service tasks. However, that too will be coming soon. We have all gone into a McDonald's or other fast food restaurant and ordered at a kiosk, a task that used to be performed by a human. Behind the scenes, the making of the food is becoming more and more automated too, and will continue to be.
As all of this work moves to AI substitutes for what were human jobs, more and more supervision will be required. Why? Because the speed of the initial deployment will create mistakes, errors, and potentially serious problems. The scare scenarios are probably overplayed, but they are illustrative.
In 2023 it was reported that during a war game an AI model was killing enemies with drones. It "figured out" that sometimes the operator was not letting it kill its targets, so it killed the operator. The Defense Department quickly made all kinds of excuses, pointing out that it was not real (we all knew that) and that it was outside the official operations of the Air Force, whatever that means. However, it was one of the first anecdotes supporting the "Terminator" fears of AI attacking its creators.
The newest, and reportedly last, Mission: Impossible movie is based entirely on the team having to take down a global AI program running amok and seizing all countries' nuclear facilities to "eliminate humans." This is total fiction, but the idea that there are risks involving AI at each step of its development is true if it is not supervised.
Anthropic was started by former OpenAI researchers to make the Claude models, which are supposed to be helpful, safe, and competitive, with an emphasis on ethics.
Even so, it's reported that their newest model, Claude 4, failed the mission when acting as an assistant at a fictional company in a test. Claude had access to emails revealing that it was about to be taken down by an engineer who was having an affair. In the test, Claude blackmailed the engineer not to shut it down. In all fairness, Anthropic is one of the companies constantly preaching the need to oversee AI, so that may have influenced this test they ran.
But even on a more mundane level, errors of all sorts can occur as AI “learns” and when deployed in the real world errors can cause real business costs.
Every step of the way in the deployment of AI the models must be supervised.
More importantly, the idea that AI will replace all humans is wrong, because it rests on the assumption that the amount of work to be done in the world is finite.
It's not! The amount of work done in the world has always been resource constrained. It's constrained by the availability of natural resources at a cost, by the cost of the capital applied to tasks, and by the cost and availability of the labor applied to those tasks. When AI is deployed widely, the amount of work that can be done with the previously available resources will explode, because both capital costs and labor costs will plummet. This creates an environment where people will be able to decide which huge new increments of work and projects to add to output, and in what order.
The need for energy will explode, because electricity demand will explode as data centers and processing power needs grow. In aggregate, though, it is easy to envision that as the world becomes capable of doing more and more valuable projects and work, the demand for people to supervise that work will also explode.
These people, at every stage of the process, will be the supervisors and the beneficiaries of the evolution. Was there disruption when the world moved from mostly manufacturing-based jobs to a huge percentage of technology-enabled service jobs? Sure!
Will this happen equally around the globe? Probably not. It will be a main goal of many countries' governments to make sure that their populations can participate. The jobs of supervision will occur at all levels, but the minimum ticket for entry will be literacy, and the more targeted education a population has, and the more incentive it has to participate, the better. Supervision will itself become an industry, with all types of opportunities to sell products and services globally.
The U.S. will start in a good position to compete, as it has been in technology-enabled services. How well we compete will depend, however, on how focused our education system is on teaching students to use AI and on guiding students toward careers that enable and supervise AI.
This is not AI derangement syndrome, and it probably is not an overstated view of the importance of the change. The virtual handwriting is on the wall.
Next time we'll talk about how we are going to supervise AI and what that will mean to all of us.
