Summary
This section addresses three specific aspects of the human contribution to society and the environment in the future, in collaboration with Artificial Intelligence.
The three dimensions we’ll explore are: Knowledge, Reliability and Trust.
Introduction
Why is this relevant right now? AI is changing how we learn, how we prepare young people for a career, and how those in mid-career adapt to and leverage the technology that is available. Employers are already benefiting from AI and the efficiency it can achieve, at times threatening portions of some jobs, or entire jobs altogether.
Understanding what the Human Leverage is, how it defines our contribution, and developing the skills it identifies will increase our chances of growing and evolving at the same pace at which technology becomes available.
In leading AI technology development companies, more than half of the requirements for open jobs are unique to a human. If we assume that leading AI companies utilize all of AI's current capabilities, we can conclude that the traits employees need to perform these jobs are the human leverage a person brings to this context.
But delineating with precision the Human Leverage and the AI Leverage, and more importantly the relationship between the two, is impossible at any given time. Both technology and humanity continue to evolve and change at great speed. Instead, articulating the relationship between humans and AI can be more constructive and shed light on what the future co-existence might look like.
The optimistic view of the future, when facing a transformational technology, is to treat AI as a collaborative partner in a hierarchy determined by the human. By empowering the human to determine what our value and leverage are, and in what ways an AI agent can add value toward the goals established, we can optimize the outcome to serve the common good.
KNOWLEDGE
Today, large corporations like NVIDIA, Google, Microsoft, Apple, and others spend billions to define the laws and ethical compass that would serve as guardrails to keep humans safe from the harm AI, guided by humans, can cause to billions of people. The Artificial Intelligence of today has the knowledge we have given it and is able to learn on its own.
In the post-industrial world, time is equivalent to money — the less time spent on a task, the greater the efficiency, the faster production happens, and the sooner products and services reach the market. AI is accelerating this cycle dramatically, performing more tasks in less time and reshaping the value of human labor.
For example, manufacturing a car now takes about half the time it did in 1970. Developing a cell phone today takes approximately one year, compared to two years or more in 1990. In manufacturing, the time savings are even more extreme. AI-driven robotics and predictive analytics now allow companies like Tesla and Toyota to produce vehicles with fewer assembly steps and higher precision. What once required days of manual calibration and inspection can now be done in minutes using computer vision and real-time quality control algorithms.
In consumer electronics, companies such as Apple and Samsung rely on AI-based simulations to test thousands of design variations before a physical prototype is even built, reducing product development cycles from years to months. In architecture and engineering, AI modeling tools can now generate hundreds of viable structural designs within hours — a process that once took entire teams weeks. In the media industry, generative tools compress post-production editing from months to days. Even in healthcare, AI-assisted drug discovery has shortened early-stage development from nearly five years to less than one, as demonstrated by the rapid design of mRNA vaccines.
These examples illustrate how AI amplifies human productivity, compressing the timeline between concept and completion. But they also highlight a deeper question: as AI accelerates progress, how do we ensure that speed does not outpace human wisdom?
Our knowledge begins to accumulate as soon as we are born. We learn who our parents are, and when we feel hungry or tired. Colors, sounds, and temperature become familiar, and we recognize when they change, whether we are inside or outside. Gaining knowledge is an endless process throughout our lives, and when formal education is introduced, the speed at which we learn depends on many factors, including exposure to information and IQ. At first, we are spoon-fed knowledge: the ABCs and basic math. Once we have a foundation, we learn to learn on our own, and it is our self-motivation that inspires us to pursue the specific aspects of life that interest us most, ultimately defining careers in which we apply what we have learned.
Likewise, AI learns from us. We feed and build the intelligence with our input. As it evolves, the lessons accumulate and inform ‘new’ knowledge, similar to the learning process in humans.
But does this mean AI's reasoning is as flawed as a human's?
RELIABILITY
Perhaps one of the most important human skills of the future is the ability to ask questions and probe when collaborating with an AI tool, enabling it to perform more accurate research or better analysis. Asking better questions improves the reliability of the answer. Otherwise, the reliability of the tool is questionable. We risk a hallucination from the AI, described by ChatGPT as:
“In the context of AI responses, a hallucination refers to when an AI system (like ChatGPT or another language model) produces information that sounds plausible but is false, misleading, or entirely fabricated.
In simpler terms — it’s when the AI “makes something up” while presenting it as fact.” For example, an AI might confidently cite a research paper or a historical quote that doesn’t actually exist, or invent a statistic that seems credible but has no real source. In 2023, several lawyers in the United States were sanctioned after submitting court briefs written with AI assistance that contained fabricated legal cases — a striking reminder of how convincing, yet unreliable, these hallucinations can be.
It is our critical thinking, applied to formulating the best question or probe, that will minimize the risk of a hallucination. It is possible that some day, AI will question its own accuracy.
Just as we must believe that a self-driving car is at least as reliable and safe as we are behind the wheel before we can confidently ride in it as passengers, AI must prove to be at least as reliable as a human when processing a task on our behalf. Believing that AI can be perfect and never err is as false a belief as believing we humans can be perfect. Still, can self-driving cars be more reliable than humans, making fewer mistakes and getting into fewer accidents? We do not have enough experience to know this yet.
TRUST
How trusted are autonomous cars?
Although the actual data do not show that autonomous vehicles present a higher risk to passengers than vehicles driven by humans, the lack of familiarity and experience with something 'new' results in a perception that distrusts AI to drive a car on our behalf.
Among humans, trust is built over time. It has cultural dimensions and is one of the most complex human emotions — one that is felt rather than reasoned. We often just know who or what we trust, and sometimes we cannot explain why; there may be no logic behind it. Psychological research supports this intuition: studies have shown that people form trust judgments within seconds of meeting someone, often based on subtle cues like tone of voice, facial expression, or posture. Cross-cultural studies add another layer — for example, societies that emphasize collectivism, such as Japan or South Korea, tend to build trust through long-term relationships and shared group identity, while more individualistic cultures, like the United States, often rely on competence and performance as foundations for trust. Neuroscience, too, points to the hormone oxytocin — sometimes called the ‘trust chemical’ — which influences how we bond and cooperate with others. These findings remind us that trust is not merely cognitive but deeply emotional and physiological, woven into our social fabric.
When trust between people is mutual and we ask someone a question they cannot answer, they will say 'I don't know.' AI tools do not respond by indicating that they do not know the answer. Might this lead us to trust AI more than we trust a person we don't know? Since the relationship is new, our response to answers may vary depending on who we are. This nuance complicates the relationship between a human and the AI tool. Those who have worked with a tool for a long time, programmers for example, might trust the tool more because they have more experience with it: they have taught it, cross-referenced and tested its answers, and made corrections.
Isn't this the same way we develop trust with humans, with the only exception being that the tool doesn't say 'I don't know'?