The Averaging Problem: Why LLMs Make Businesses Smarter—and More Alike

There’s a quiet failure mode emerging in the age of AI.

It’s not hallucination.
It’s not bias.
It’s not even over-automation.

It’s something more subtle—and potentially more dangerous:

Averaging.

As companies increasingly rely on large language models (LLMs) to generate ideas, shape strategy, and guide decisions, they are drifting toward a shared center of gravity. Outputs become more polished, more coherent, more correct—and at the same time, less distinct, less risky, and less strategically interesting.

AI makes every team more productive while making every company more similar.

That is the paradox. And in domains where differentiation is the only moat, it is a serious problem.

What is “Averaging”?

Averaging is the tendency of LLM systems and workflows to produce outputs that converge toward high-probability, consensus-compatible responses—suppressing outliers, minority perspectives, and strategically differentiating ideas.

In simpler terms: LLMs compress not just knowledge—but variance.

They don’t just summarize what is known.
They standardize how it is expressed.
They flatten how it is applied.

This shows up as outputs that are:
– Fluent but familiar
– Structured but predictable
– Correct but forgettable

Why Averaging Happens

This is not a bug. It is a feature of the system.

1. Objective functions reward probability, not originality 
LLMs are trained to predict likely continuations. The highest probability answer wins.

2. Alignment pushes toward safety 
Models are optimized to be helpful and agreeable. That often suppresses contrarian thinking.

3. UX encourages convergence 
Users ask for “the best answer,” not multiple competing ones.

4. Humans over-trust fluency 
The more polished the output, the more we accept it—regardless of originality.
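
To make the mechanism in point 1 concrete, here is a toy sketch (illustrative numbers, not real model probabilities): as sampling temperature approaches zero, generation collapses onto the modal, consensus answer, which is "averaging" in miniature.

```python
import random
from collections import Counter

# Toy next-token distribution over candidate brand taglines.
# (Illustrative numbers only, not from any real model.)
candidates = {
    "data-driven solutions": 0.40,         # high-probability consensus phrasing
    "customer-centric growth": 0.30,
    "we sell time back to parents": 0.02,  # the distinctive outlier
}

def sample(dist, temperature=1.0):
    """Temperature-scaled sampling: as T -> 0 this approaches argmax (pure averaging)."""
    weights = {k: v ** (1.0 / temperature) for k, v in dist.items()}
    total = sum(weights.values())
    r = random.random() * total
    for k, w in weights.items():
        r -= w
        if r <= 0:
            return k
    return k  # guard against float rounding

greedy = Counter(sample(candidates, temperature=0.01) for _ in range(1000))
diverse = Counter(sample(candidates, temperature=1.5) for _ in range(1000))
print(greedy.most_common(3))   # near-total collapse onto the modal answer
print(diverse.most_common(3))  # the outlier survives at higher temperature
```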

Where Averaging Breaks

In operational tasks, averaging is useful.

In marketing, strategy, and creativity, it is dangerous.

Marketing is not about correctness.
It is about differentiation.

The best campaigns are not the most probable.
They are the most distinctive.

AI will not make marketing wrong.
It will make it indistinguishable.

The Missing Variable: Taste

Most conversations about AI ignore the most important human contribution: Taste.

Taste is not preference.

It is:
– The ability to recognize what is interesting 
– The instinct to choose what is non-obvious 
– The judgment to reject what is technically correct but strategically dead 

LLMs recognize patterns.
Taste breaks them.

Taste is what prevents convergence.
Taste is what creates advantage.

Taste is not the average of what worked.
It is the selection of what shouldn’t have worked—but does.

How Averaging Shows Up

You can see it everywhere:

– Brand positioning that sounds interchangeable 
– Personas that feel generic 
– Campaign ideas that are “good” but forgettable 
– Messaging frameworks that mirror competitors 

Each output passes individually.

Together, they erase differentiation.

The Organizational Risk

LLMs are becoming consensus engines.

They validate executive assumptions.
They reinforce safe decisions.
They give authority to conventional thinking.

AI doesn’t just average ideas.
It averages conviction.

How to Overcome Averaging

1. Separate divergence from convergence 
2. Prompt for conflict, not answers 
3. Inject specificity 
4. Use multiple perspectives 
5. Measure distinctiveness 
6. Use AI as a dissent engine 

You do not beat averaging by asking for creativity.
You beat it by designing for disagreement.
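
As one concrete illustration of point 5 above, distinctiveness can be scored rather than guessed at. The sketch below uses word-level Jaccard distance as a crude stand-in for a real embedding model, and the candidate list is invented for the example: generate many options, then keep the one farthest from the pack rather than the most fluent.

```python
# A minimal sketch of "measure distinctiveness": generate many candidates,
# then keep the one farthest from the pack. Jaccard distance over words
# stands in for a real embedding model; the idea list is hypothetical.

def jaccard_distance(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return 1.0 - len(wa & wb) / len(wa | wb)

def most_distinct(candidates: list[str]) -> str:
    # Score each candidate by its average distance to all the others.
    def avg_distance(c: str) -> float:
        others = [o for o in candidates if o is not c]
        return sum(jaccard_distance(c, o) for o in others) / len(others)
    return max(candidates, key=avg_distance)

ideas = [
    "A data-driven platform for customer engagement",
    "A data-driven platform for customer growth",
    "An engagement platform driven by customer data",
    "A concierge that answers before customers think to ask",
]
print(most_distinct(ideas))  # picks the outlier, not the consensus phrasing
```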

The Real Opportunity

The future is not AI replacing humans.

It is AI + Taste.

AI provides scale and pattern recognition.
Humans provide judgment and differentiation.

AI shows you what is common.
Taste tells you what matters.

Final Thought

We are entering a world where everyone can generate “good” outputs.

Good is no longer enough.

Advantage comes from deviation.

The companies that win will not be the ones that follow AI.

They will be the ones that know when to ignore it.

Corporate Cultures Must Change

Transition to Culture by Design, Human Powered, AI Augmented

AI is rapidly becoming a transformational resource. It’s already affecting white-collar jobs. In response, organizations would benefit from reconsidering the following:

  • Which values still apply?
  • Which have become unnecessary (or archaic)?
  • Which new ones could be beneficially added?
  • Are we best supporting humans in their evolving roles?
  • Are explicitly stated cultures and values necessary to succeed in business?

There are many reasons why these assumptions need to be challenged.

Corporate culture wasn’t conceived as a concept until the 1920s, when a study showed that “social relationships and group norms strongly influenced worker performance.” Around 1979–1982, “academic and business interest surged and executives began seeing culture as a driver of performance, not just a byproduct.”

In other words, a business motivation drove the concept from the start.

Does a change as transformational as the current early adoption of AI into business warrant only a chat around the water cooler or espresso machine? Should companies wait until the dust settles? Is there a perfect time to contemplate how an intentional culture may or may not contribute to success in the new environment?

At a minimum, companies should reflect on their current culture and whether their values conflict with the narrative that efficiency and productivity are the only ways to compete and win.

Recent departures of key talent at AI companies like Anthropic and Meta are indicative of a shift in corporate culture and values resulting from AI entering the scene. So it is not just the thousands of people being laid off that signal AI’s impact on jobs; it is also the few whom these companies might have preferred to retain.

The nature of cultures

Cultures develop by design or by default. Organic cultural foundations happen when founders and entrepreneurs hire like-minded individuals who might share similar values, and have bought into the vision and mission of the endeavor.

The leaders set the tone for behavior. The boundaries are set by how they act: starting times, communication methods, degrees of formality, handshake agreements or lengthy legal documents, whether cursing is tolerated, social behaviors, personal boundaries (night and weekend calls from managers), and so on.

The commitments regarding acceptable behaviors that are made at the beginning of the relationship are more often implicit, and the hierarchy establishes the norms amongst team members. The tone is normalized and the dynamic can be smooth when all team members behave in a similar manner.

These initial behaviors might or might not serve the company well over time. Cultures evolve! In 2018, Google removed its famous “don’t be evil” value/tagline from the preface of its code of conduct.

Google never fully explained why, but Alphabet, the parent company formed a few years earlier, had already adopted a new narrative: “do the right thing.” Where the interpretation of ‘evil’ is near universal, what is ‘right’ is much more debatable, giving wiggle room to behaviors within organizations.

When the organization grows beyond the circle of internal referrals, it hires individuals who may object to the established behaviors or bring in different, potentially controversial ones. The culture then enters a transition period of adjustment and correction, typically resulting in a culture palatable to people with diverse values and experiences.

If one of the embedded behaviors is self-awareness, leadership observes how the culture is changing, intentionally addresses sources of conflict and habits that work against growth and harmony, and decides how much conflict to embrace in the spirit of innovation and growth. If awareness is not in the repertoire, or conflict aversion is baked into the foundation, the opportunity to correct course is missed, and the culture deteriorates toward dysfunction and, ultimately, poor economic performance. Enron and Theranos are extreme examples of morally corrupt cultures that failed miserably. But there are less extreme examples.

Regardless of the stated core values, what matters most is this: can the leadership be trusted to do what they say they will do? It matters less what that is. If the leadership declares a core value of ‘don’t be evil’ and then starts doing evil, they can’t be trusted. If the message is ‘make as much money for the company as possible, and you’ll be rewarded for upholding that value’, and they follow through, that’s a trustworthy value. We can judge whether the values align with ours or not, and choose not to work for that company. But that is different from judging whether values are truthful and honest.

When it comes to TRUST, doing what we say we will do is all that matters. 

Enter AI 

AI will impact companies’ cultures differently depending on their stage, and on varying timeframes.

Agility is vital to transitioning into a fully productive and satisfying culture that employees can be proud of. Hence, larger companies might encounter more challenges pivoting and minimizing chaos, while smaller companies might find it easier.

For new companies, entrepreneurs and leaders come with experience, but also into a fundraising environment that has already determined their worth. Many are promising success based on their AI ‘strategy’. Few will deliver, perhaps the same percentage as in the past, when a different technology hype came at us with impunity (the dot-com bubble of 1995–2000 and bust of 2000–2002). Or a pandemic! These are the external factors that impact culture. We have limited control, if any, over these changes.

With AI on the scene, CEOs still address culture. 

Two examples worthy of mention in this context:

Reed Hastings, Netflix’s co-founder, long-time CEO, and now chairman of the board, has gained a reputation as a cultural leader. Among his most famous quotes:

  • “Lead with context, not control.” [Goodreads]
  • “Companies rarely die from moving too fast, and they frequently die from moving too slowly.” [SucceedFeed]
  • “At Netflix, we think you have to build a sense of responsibility where people care about the enterprise. Hard work… doesn’t matter as much to us. We care about great work.”
  • “You have to be big, fast, and flexible.” [Medium]
  • “Most entrepreneurial ideas will sound crazy, stupid and uneconomic, and then they’ll turn out to be right.” [SucceedFeed] 

What would be the behaviors in the organization that these messages might influence? 

Alex Karp, Palantir’s CEO, has made waves in the corporate world with his political stance, economic success, and openly anti-woke narrative. These are some of his quotes.

  • “Warrior Culture” & Performance: “We are dedicating the company to the service of the West and the USA,” [as discussed in this Reddit post]. Karp describes his company as “chaos in the best way,” where people move fast and focus on mission over hierarchy, [as mentioned on this Instagram reel].
  • Anti-Woke Meritocracy: Karp calls his company “completely anti-woke,” emphasizing that talent and results matter more than ideology. He has described Palantir’s values as “fighting for the right side of what should work in this country — meritocracy, lethal technology,” [as noted in this Business Insider article].
  • Value Creation vs. Status Quo: “We get paid on value creation. Everybody in this world is going to get paid on value creation,” as reported by Forbes.
  • Corporate Accountability: “Poor people are the only people who pay the price for being wrong in this culture,” Karp said, arguing that executives should absorb the risks of their decisions, [according to this Fox Business article].
  • Western Values & Tech: In a letter, he stated that “the rise of the West was not made possible ‘by the superiority of its ideas or values or religion… but rather by its superiority in applying organized violence,'” [as highlighted in this Facebook post]. 

Same question: what behaviors can be expected from an organization led with these goals and values?

It would take many more interviews to fact-check the actual impact of these leaders’ messages with regard to culture, core values, and behaviors. The point here is not to define what a good or bad culture is. The point is that employees are buying into these cultures, and the learned behaviors, and how they impact the world, don’t stay at work. That buy-in has to be a conscious decision.

Keen, early awareness of how external factors are impacting our business and professional growth applies here. But awareness alone is insufficient. These conversations with the team need to happen now. Not decision-making meetings: thought-provoking discussions that give team members agency over the outcome and build the emotional and intellectual investment that can produce a shared benefit for employer and employee. Organizations will benefit from including diverse opinions regarding AI adoption. The topic must be demystified, so that the organization can recognize AI as a resource, or a challenge, for the human in each role.

It can’t be left at ‘talk’. The outcome has to be acted upon, with the sense of urgency that applies when the environment is moving as fast as this one. Analysis paralysis could kick in, because it is a complex and complicated challenge.

Most leaders might choose to ask an AI agent about cultures, values, and behaviors. The responses will be based on history: how these concepts have been interpreted in the past. The AI’s content is expansive, because opinions and developments regarding cultures have been extensively documented by humans. Leaders will still have to make a judgment call, and this is the right time to be innovative and creative, not to copy and paste from the past.

The Opportunity

“The transition to an A.I.-first world may be inevitable, but the path is still being paved with the heavy lifting of the very people being phased out.” Is this what the company wanted? As an example…

Moving too fast through the “transition” period is a mistake, because we are not operating in business-as-usual mode. It is a high expectation, given that what’s ‘right’ is up for interpretation. Some executives care a lot, others don’t, but changes in leadership take time, and full value alignment is challenging to achieve.

A lukewarm, comfortable-enough culture won’t cut it if companies want employees to embrace the change. It must be good enough to retain all valuable talent, or companies risk losing what gave them their competitive advantage.

Employers that proactively decide to transform the culture, prevent unintended harm to humans, use AI to enhance performance, feed self-esteem and confidence in all, and build a greater business culture than previously conceived, will demonstrate the way to preserve and build a thriving human workforce. Employees can view this as an opportunity to emphasize the FORCE in the workforce, and insist on the changes that they wish to see enacted in the new environment.

This conversation with employees about AI-driven transitions has to start now, along with dedicating adequate budgets and time for training and development, and exercising flexibility on what type of employment relationship makes sense for varied roles. Organizational development and headcount planning can look very different from how they have been done.

Companies and employees will benefit from better planning and more training. Companies that do this well and early will have an edge in the future. Employees that receive AI training, and adjust well to the new environment, will acquire more value and agency regarding their future.

These choices can result in economic success AND job satisfaction for all involved.


The Art and Science of Workforce Transition in the AI Era

All relationships end.

In the cleanest scenario, as ironic as it might be, the relationship ends with a death. Closure is irrefutable, and the grieving process is filled with humanity and compassion from others.

The loss of a job is much muddier, and we can hide behind the adage of ‘it’s not personal, it’s business.’ But what is more personal than the loss of livelihood? It removes our ability to provide for our families; it brings the shame and embarrassment of being the one let go, and not others.

When I quit a job and gave one month’s notice, as is typical for an executive role, and was walked out rudely and unnecessarily, I felt a tinge of what being fired or laid off might feel like. It felt like a stomach flu and a regular flu at the same time: sadness, anger, and confusion. What did I do wrong? The reflection came much later, and I came to understand that the employer’s reaction only reflected the quality of the relationship we had while I was employed, or the lack thereof, and a level of trust that wasn’t what I thought. My bad. But also, theirs?

Separating amicably is both an art and a science.

The science part is about business economics and process management; the art is about relating as human beings even while we are separating. A smooth parting of ways is possible, even under the worst of circumstances. This is where corporate culture, as experienced by the employees, makes a difference. Connecting as humans in the process has always been challenging, and labor laws give all corporations pause when it comes to messaging: what language should be used to explain ‘why’ the separation is happening. It’s complicated.

After sitting for decades at the table where these life-altering decisions have been made, I can say the experience hasn’t changed much. Leadership teams struggle with their decisions and positions depending on how they view the people in the equation. Are we resources? Team members? Collaborators? Contributors? Commodities? Perhaps we saw and understood these attitudes and beliefs during the good times, but there is no other time when we experience the true colors more than when the ship is going down, or, as we have seen recently, when the data shows that companies don’t need, or won’t need, the same number of people currently on their payroll.

Separation as Strategy: How Thoughtful Exits Drive Human Leverage

Whether we call it a layoff, restructuring, efficiency measures, or anything else, the emotional impact on the individual losing their job is the same. They were rejected. Dumped. Hurt. In some cases, careers are negatively impacted. And the tendency, driven by cultural values, is to believe that money (more if you are receiving, less if you are paying) is the answer that will make the problem go away quickly. When the company mismanages the communication and the process, it not only can be more costly; it can destroy any semblance of trust and loyalty that might have been built with the workforce. The employee carries this sentiment to the next job, starting it with suspicions about the new employer and their intentions.

‘Employees are our greatest asset’ seems like an antiquated, or today insincere, tagline that many companies continue to include in their stated core values. When a company does not act on this value, all its stated values may be questioned. Are customers first? Are vendors partners? And so on. These stated values might live on websites and in employee handbooks, but in practice, does leadership live them, especially when employees are told they are no longer employed, that effective today they are terminated? Apt messaging after a termination decision could and should reinforce these values rather than contradict them.

When very successful companies, measured by market valuation, profitability, and growth projections, lay off workers because efficiency, competitiveness, and productivity can be achieved with AI, the decision can be the most challenging to accept, the most painful, and the most confusing. In an M&A transaction, it is common to have redundancies in roles. It still hurts, but at some rational level the employee might understand it.

Handling these layoffs unskillfully has real potential to damage the fabric of our society. To prevent this, we need skillful communication, with empathy, listening, patience, sympathy, negotiation, diligent follow-up, and even love for our fellow human beings throughout the process. For an employee, separating from an employer involves a grieving process. To come through that process with one’s dignity intact requires support from the employer and/or the wider community.

As with the death of a loved one, nobody can jump to acceptance without going through the process of grieving. Moreover, nobody can do this alone. Employers have the responsibility, the duty, to support employees through this process. Those who fail to do so risk damaging the morale of the employees who have kept their jobs (this time). It can devastate the post-layoff company culture.  

Elevating Transitions: A Human-First Framework for Separation

The time to review the off-boarding process and how decisions are made and communicated is now.

Managing the Process:

  1. Timing. There is no good timing, but companies can make it worse by holding a layoff right before a holiday. When the reason for the layoff is financial – declining revenues, no profits, running out of money – paying for holiday time makes a material difference and can put payroll for everyone else at risk. But when the action is driven by predicted efficiency gains (time will tell), why not pay for the holidays and make the effective date later? The amount of money matters less than the gesture.
    • Spend time with people. Listen to their feelings. This is time invested in the reputation for the future of the organization.
    • Do not rush. The emotional process varies by individual. Take the cues from those laid off. Some will emote longer than others.
    • Never on a Friday or before the end of the month.
  2. Communication. Tell it like it is. Express empathy for those affected. Avoid making it sound like it’s a ‘good move’ for anyone. It is not. Save the positive vision for later, for those who remain employed. Reducing the communication to a mass email to everyone being laid off is not just insufficient; it ignores the individual impact, which varies widely depending on personal circumstances. Chronic illnesses, college tuitions, mortgages, pregnancies, and disabilities, for example, could all depend on one person’s employment. Yes, it is a lot more work on the part of the company, but the individual touch speaks to the character of the leadership and the ‘people’ culture invoked.
  3. Organization. Have all needed documentation and information ready and easily accessible. Leading people to a website is convenient, but also transactional. There is upside to handing out a paper version if the layoff is done in person. Something easy to refer to. EDD Unemployment links, Cobra for benefits, summary of last paycheck, etc. Make this EASY for those leaving. For remote employees, offer Office Hours when they can ask questions.
  4. Outsource Support. IF the reason for the layoff has to do with AI efficiencies, offer a workshop on available AI resources, help with resumes, a list of companies hiring, and guidance on how to process the change. Use HR to lead and facilitate multiple sessions, individual and group; all of this helps laid-off individuals and differentiates great companies from others. Train laid-off employees on how to use AI in their search. HR can organize support-group opportunities, like coffee gatherings, to maintain the ‘connection’. This builds goodwill with retained employees as well.
  5. Exit. IF the company trusted the employee the previous week not to be violent, not to steal, not to want to hurt others in any way, and to behave professionally and decently, why not trust them on the day of the layoff? This is about measured risk-taking; if the leadership knows the team members and can predict their behavior, why not treat them like law-abiding employees and spend time, one last time, to part amicably?

In turn, employees should respect the company’s policies, including returning company property (laptops, cell phones, etc.). 

Food for Thought

In the recent article The Agent in the Backpack, we addressed how employees can determine their value in relation to how they use AI. It is becoming increasingly critical to intentionally reflect on what we each bring to the table for a new employer, or when becoming self-employed. There is a new vocabulary, and to engage in the new relationship, we have to speak the same language.

The Agent in the Backpack: Navigating the Constraints of Bring Your Own AI in Today’s Business

The rapid evolution of generative AI has ushered in a new workplace reality where individual employees wield powerful autonomous tools (AI assistants and agents) that often rival corporate infrastructure. At Humanleverage.ai, we are defining a structure, informed by our research, that formalizes Bring Your Own AI (BYOAI) into a deliberate governance and workflow model designed to separate individual thinking from institutional execution. Rather than wait through 15+ years of integration and adoption, as in the mobile-device cycle, we should design now for a state that is ideal and aligned to how humans actually behave with AI.

Unlike common industry descriptions that treat BYOAI as a synonym for “Shadow IT,” the Humanleverage.ai model treats it as a compartmentalization strategy for human and organizational agency and the cognitive tools of those agents. It moves away from the “shadow” era, characterized by workers secretly using unmonitored individually acquired tools, and toward a mature state similar to the later stages of Bring Your Own Device (BYOD) in the mobile transformation of the workplace.

Who Should Care and Why

This matters to every leader and knowledge worker operating at the intersection of human judgment and AI-enabled performance. Governance must harness agentic horsepower without eroding institutional integrity; the BYOAI framework enables innovation at the edge without sacrificing safety or quality. The question is no longer whether these tools improve output, but whether organizations are structured to recognize, govern, and protect both human and institutional intelligence. Without clarity, intelligence scales faster than trust. With it, BYOAI becomes a force multiplier for judgment, velocity, and durable professional relevance.

Defining BYOAI: Zones of Architecture

Technically, BYOAI for Humanleverage.ai is defined by four distinct zones of operation and the bridges that connect them. This architecture ensures that work safely moves from individual cognition into enterprise systems without ambiguity or leakage.

Zone 0: Individual AI Workspace (Non-Corporate): This is the user’s private cognitive space for pure ideation and learning. It is explicitly out-of-scope for the corporation, containing no corporate data and requiring no audit.

Zone 1: Individual AI Workbench (Pre-Assimilation): This is the “Agent in the Backpack.” It is a declared and permitted AI environment used for drafting, modeling, and reasoning. Crucially, while it is work-aware, it has no write-access to corporate systems of record.

Zone 2: Enterprise AI Artifacts: This is the handoff point where individual output is intentionally promoted into corporate control.

Zone 3: Operational & Production Enterprise Systems: These are the fully integrated, depersonalized institutional systems where individual continuity ends and institutional continuity is prioritized. Examples would be checked-in/deployed code, production systems, operational results, codified knowledge artifacts, managed data stores, and the like.

BYOAI vs. Shadow IT: The Critical Distinction

The defined BYOAI zones allow a diverse range of AI usage within an architecture that is the opposite of Shadow IT:

1. Declared vs. Hidden: Shadow IT is undeclared and unbounded. The BYOAI model is explicitly zoned and enforced, centering governance around the activity rather than attempting to police private cognition.

2. Explicit Transitions: In shadow IT, IP ownership is ambiguous. In the BYOAI model, ownership transfer occurs at a clearly defined promotion point (the Zone 1 to Zone 2 bridge), where explicit user action and security scanning are required.

3. Governance of Boundaries: Shadow IT hides from governance; BYOAI is built to make human judgment legible. It provides a declared, observable surface where policy can be enforced without surveilling every thought.

4. IP Protection: Shadow IT risks corporate IP leakage into public models. The BYOAI framework uses bridge controls, such as identity assertion and tool allow-listing, to prevent data leakage while maintaining the worker’s “cognitive sovereignty” in Zone 1.
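
As a sketch of how those bridge controls might look in practice (our illustration under the zone model above, not a product specification), promotion from Zone 1 to Zone 2 can be reduced to a handful of explicit, auditable checks:

```python
from dataclasses import dataclass

# Hypothetical tool allow-list enforced at the Zone 1 -> Zone 2 bridge.
ALLOWED_TOOLS = {"approved-llm", "approved-ide-assistant"}

@dataclass
class Artifact:
    author: str           # asserted identity of the promoting user
    produced_with: str    # tool used in the Zone 1 workbench
    content: str
    user_confirmed: bool  # explicit promotion action, never ambient sync

def scan_for_leakage(content: str) -> bool:
    """Placeholder for enterprise DLP/security scanning at the bridge."""
    return "CONFIDENTIAL" not in content  # stand-in rule

def promote_to_zone2(artifact: Artifact) -> bool:
    if not artifact.user_confirmed:
        return False  # promotion requires a deliberate act by the worker
    if artifact.produced_with not in ALLOWED_TOOLS:
        return False  # tool allow-listing at the boundary
    if not scan_for_leakage(artifact.content):
        return False  # security scan before corporate ownership transfers
    print(f"Accepted into Zone 2; ownership transfers from {artifact.author}")
    return True

draft = Artifact(author="jdoe", produced_with="approved-llm",
                 content="Draft market analysis", user_confirmed=True)
promote_to_zone2(draft)  # passes all three gates
```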

Corporate Constraints and the Need for “Seams”

Enterprises face significant difficulty distinguishing reliably created from improperly created knowledge-worker outputs that have been AI-enabled in some capacity (“slop” being a major category of those errant creations). While traditional shadow IT ignores these risks, the BYOAI framework addresses them by creating “seams” in the workflow. In a shadow-IT environment, thinking and executing collapse into a single, unmanaged surface where accountability blurs. The Humanleverage.ai model uses Zone 1 as a buffer where high-velocity cognition can occur, but risks are surfaced before commitment to the system of record. This prevents “brittle workflow AI” by ensuring that institutional systems (Zone 3) absorb only stable, depersonalized, vetted artifacts.

BYOAI acknowledges how humans actually work, building system hygiene that creates clarity aligned to that workflow and, as a result, reduces adoption concerns and gaps. By focusing enforcement on the transitions between zones, organizations gain lower real-world risk, stronger IP claims, and higher trust, with optimized, minimum-friction work processes.

In short: Shadow IT is an attempt to sneak past the rules; BYOAI is an invitation to work within a framework where the rules are clearly posted on every door of the Enterprise.

Corporate Constraints to BYOAI Adoption

In today’s complex enterprises, the corporate constraints against unmanaged BYOAI adoption are rooted in the imperative to maintain operational integrity, security, and legal compliance.

One primary constraint revolves around data security and regulatory compliance. Many proprietary AI tools rely on user inputs to train and operate, posing a major risk to sensitive company information. For instance, individuals using free or external AI models like ChatGPT risk feeding personal information, financial data, or other proprietary details into systems that are not designed for corporate data protection. In highly regulated fields like medicine, legal and regulatory requirements, such as those concerning Protected Health Information (PHI) and HIPAA, present nontechnical barriers that prevent wholesale AI displacement and tightly constrain individual usage. As an oncologist noted, if a personal GPT is trained on big data containing PHI, the institution may assert that the data and the resulting AI model belong to them, hindering the researcher’s ability to move jobs.

A second critical constraint is the lack of robust governance and standardized workflow logic across complex organizational processes. AI models, while powerful, are fundamentally probabilistic, leading to a much lower tolerance for error in business-critical processes compared to casual use. AI systems struggle with unstructured logic, conditional triggers, and understanding the implicit organizational context necessary to follow complex business workflows. This is evidenced by failures where AI assistants have omitted necessary steps, like a mandatory compliance attestation, because the model lacked understanding of why that sequence and conditional logic mattered in a regulated process. When policies shift, fine-tuned models—whether corporate or individual—can become “brittle workflow AI”.
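
The attestation failure above is easy to see in code. In the hypothetical sketch below, the conditional gate is deterministic and cannot be skipped; an LLM agent re-deriving the workflow from prose has no such guarantee (all names and rules here are ours):

```python
# Hypothetical sketch of a mandatory compliance gate. In a deterministic
# workflow engine the conditional step cannot be omitted; a probabilistic
# model reproducing the workflow from prose may skip it.

def requires_attestation(request: dict) -> bool:
    # Conditional trigger: regulated product plus access to customer data.
    return request["regulated"] and request["touches_customer_data"]

def run_workflow(request: dict, attested: bool) -> str:
    if requires_attestation(request) and not attested:
        raise PermissionError("Mandatory compliance attestation missing")
    return "executed"

# The engine encodes the 'why': the gate is code, not a pattern the model
# may or may not choose to reproduce.
print(run_workflow({"regulated": True, "touches_customer_data": True}, attested=True))
```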

Finally, the challenge of maintaining quality control amidst AI-generated “slop” constrains enthusiasm for unmanaged tools. While AI can easily generate large volumes of content, it often produces text or code that contains hallucinations, logical errors, or simply lacks the necessary nuance and quality. This forces human workers to spend time manually checking and correcting AI output to maintain professional standards, sometimes negating the efficiency gains initially promised.

Despite these constraints, the adoption of personal AI tools offers transformative benefits for the average worker, similar to how mobile devices brought unprecedented flexibility and access to information.

1. High-Speed Research and Knowledge Synthesis: AI tools function as an external “second brain” or highly capable machine, allowing workers to externalize research and synthesize massive amounts of information instantly. For workers like a urologic oncologist, specialized AI interfaces dramatically reduce the time spent searching databases like PubMed, enabling quick access to obscure research articles or clinical evidence necessary for real-time decision-making in the clinic. This access to deep, rapid research enhances the quality of professional output.

2. Automation of Tedious and Menial Tasks: AI agents excel at structured, repeatable, and data-heavy tasks, freeing humans to focus on higher-value activities. For software developers, tools like GitHub Copilot or general LLMs can write micro-processes, assist with complex coding syntax, and debug code, saving hours of typing and tedious error correction. Similarly, professionals across fields gain capacity by offloading low-value, time-consuming tasks like bookkeeping, drafting emails, summarizing long documents, and automating data entry into customer relationship management (CRM) systems.

3. Enhanced Productivity and Output Quality: AI acts as a force multiplier, allowing individuals to achieve objectives that would otherwise require multiple staff members or substantial resources. For independent consultants or solopreneurs, AI allows them to operate with a “leaner team,” generating content, managing social media, and executing strategic research in a way that previously demanded a team. This enhancement allows workers to leverage their time for strategic thinking and customized client service rather than surface-level generic preparation.

Major Challenges of Bring Your Own AI

While the benefits are clear, allowing individual employees to introduce their own AI systems presents significant organizational challenges.

1. The Crisis of Trust, Reliability, and Accountability: When AI hallucinates or fails, the consequences in a business setting can be severe, yet assigning accountability becomes complex. Workers report that unmanaged AI tools, especially when used for research or data analysis, frequently introduce errors, fabricate citations, or fail logic checks. One user noted that even after repeatedly training an AI tool on a precise request, it would perform differently, or worse, unilaterally edit the underlying text in unintended ways, creating a vicious cycle of correction. The lack of intrinsic accountability in AI—it cannot suffer consequences for screwing up—means that human operators must always remain “in the loop” for judgment and verification, potentially negating the perceived efficiency.

2. The Human-AI Paradox and Talent Management: The use of personal AI assistants creates a profound tension known as the Human-AI Paradox: workers must improve their AI tools to remain competitive, yet the better the AI gets, the more easily the worker’s role can be replaced. This raises questions about how organizations should protect human capital and incentivize agent training, moving toward an era of agentic work. Furthermore, unmanaged AI adoption can contribute to an environment of constant layoffs and a culture of fear, as executives aggressively pursue efficiency cuts based on AI’s perceived capabilities, often without developing proper change management plans or recognizing the chaos introduced by new tools. This efficiency-first approach risks “automating out the very authors of tomorrow’s innovations”.

The BYOAI model Humanleverage.ai is proposing, coupled with appropriate technically enabled enterprise governance, allows the “Agent in the Backpack” to deliver transformative benefits:

Unlike unmanaged or one-sided management of AI, which risks “automating out” creators and/or creates ambiguous risk profiles for IP, this model incentivizes agent training as a form of professional autonomy that remains connected to institutional outcomes.

• Proper expectation setting with a vernacular to discuss those expectations between the enterprise and the work contributor

• Trust Structures between Organization and Individual 

• Preservation of Human Capital with Connected Autonomy and Clear Moments of Transfer of Control 

Ultimately, the constraint for today’s businesses is finding the necessary balance: harnessing the exponential power of individual, customized AI tools without sacrificing the foundational human qualities of contextual judgment, accountability, and reliability that ensure long-term value and effective collaboration.

The Human Leverage Playbook: How to Become Indispensable in the AI Era

In the past, individuals charted their career growth by balancing passion for their work, market-valued skill sets, and domain expertise. In the AI era, the challenge shifts:

Professionals must identify and clearly articulate their human leverage—the unique value they bring to AI-enabled workflows. Your human leverage comes in primarily two forms:

  1. Project-based value – helping train, code, or implement AI systems. This value is often temporary and transferable, over a short-to-medium-term horizon.
  2. Sustained organizational value – rooted in distinctly human cognitive strengths and contextual expertise that AI cannot easily replace.

To thrive, professionals must understand both their cognitive advantages and their domain-knowledge advantages, and learn to communicate these effectively across both project-based and sustained areas of operation.

Both human cognitive strengths and domain expertise will drive your sustainable organizational value, but domain knowledge is more vulnerable over time if it does not rest on (differentiating) human cognitive advantages. Conversely, in the short term, domain expertise is of very high value when deployed at the AI project/initiative level, provided it is married with organizational process awareness and a willingness to re-envision workflows at the system level, not just automate components. It is fine to start at the basic automation level to understand and experiment, of course, but simply swapping an agent in for a cog in existing processes would miss the tremendous possibility of human-AI collaboration creating systemic value, rather than just making today’s operations more efficient.

A Workflow Lens: Where AI Fits, Where Humans Lead

To make these ideas tangible, let’s break them into a workflow structure. Consider a typical operational process, like Ad Operations in digital advertising.
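
An illustrative split (the specific task assignments here are ours, for illustration only):

  • Structured, repeatable, data-heavy: campaign set-up, trafficking, QA checks, pacing monitoring, standard reporting.
  • Judgment-dependent: budget and channel tradeoffs, exception handling, brand-safety calls, strategic pivots.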

Notice the pattern: AI excels at structured, repeatable, and data-heavy tasks, where pattern matching and detection are advantageous. Human-in-the-loop (HITL) involvement remains critical where judgment, tradeoffs, ethics, or system-level reasoning is required.

Now, this just maps basic existing practices, illustrating where humans remain essential when existing workflows are simply automated to gain efficiency through AI deployments. True leaders in AI adoption, be they individuals or organizations, will move beyond this stage of incremental process optimization. They will identify where new forms of value can emerge, either by redefining how the work itself is structured or by introducing new critical processes that naturally devalue older ones. Efficiency perfects what already exists; creativity matched with intelligence reimagines what could exist. The advantage now belongs to those who leverage systems design across humans and AI to extend, rather than merely accelerate, the limits of performance. An unrelenting, reductionist focus on efficiency, a great risk I keep coming back to, can erase the resources and the problem space required for true transformation. This is critical for organizations, and it aligns capital interests with supporting human-leverage development.

Efficiency is only a goal when outcomes remain bounded in a known range. Real opportunity lies in discovering higher-order payoffs unlocked through the integration of human and AI agents working together in newly designed patterns of operation.

As AI empowers humans to achieve, with ease, parts of the workflow that used to be more challenging, time-consuming, or costly, other parts of the workflow may become less important. For example, MTA (Multi-Touch Attribution) has been a fundamental part of media accountability and optimization workflows for the last 15 years. As experimental design and machine-learning techniques, aided by AI, become less burdensome and more rapidly actionable, MTA will likely have decreasing utility, and the attractiveness of increased investment in supporting it (due also to other market pressures best discussed in other venues) will become questionable. So the ecosystem shifts, and the skill competencies around understanding experimentation and marketing-mix tools become more important than they used to be. This is to say there will be broader changes than whether AI or a human operator performs a given process. Entire competency sets will be rewritten as AI and humans rediscover the best ways to achieve results.

Sangeet Paul Choudary dedicates an entire book, Reshuffle, to this topic more broadly. The “contextual value” of a given work role to the “system” of domain operation is one of the key insights he articulates well, and it should inform how one cultivates one’s human capital (while accepting the limits of what is truly predictable and what has unexpected consequences). As Choudary so eloquently states, “AI not only lowers the economic value of specific skills by reducing scarcity, but it also reduces the contextual values of specific roles by changing how the system is structured around them…our greatest challenge is not automation replacing skills, but a combination of automation and coordination continuously changing job architecture.” From a knowledge worker’s perspective, re-architecting one’s role therefore means understanding not only what one does, but how that work connects to and enhances the evolving processes and outcomes of value within an AI-driven organization.

Embracing the rearchitecting of your contributions and utility to an organization will be necessary for sustaining your career. Cultivating cognitive abilities around strategic creativity, adaptability, and systems thinking, coupled with business practices for communicating those in practical plans of action for larger systemic change, is no longer just a leadership skill. In general, workers can no longer rely on the merit of their skills and performance alone to maintain their professional trajectory.

What does “rearchitecting” mean?
When an industry undergoes a major ecosystem shift, specifically through AI adoption and reconfiguration, re-architecting one’s role means redesigning the function you perform in a networked, semi-autonomous system. Instead of being a node of execution, you become a source of orchestration, judgment, and interpretation or narrative explanation.

This involves:

  • Focusing on where information, decision-making, and accountability are critical in evolving AI-enabled workflows
  • Identifying your and others’ roles in designing these workflows, and intervening regularly to validate and improve these systems

You need to leverage your knowledge, adaptability, and trust, moving from an operational-output focus to contextual insight as those outputs are facilitated by AI counterparts (connectors, agents, and the AI personas/twins to come). You are reframing and refining interfaces with AI-mediated collaborations. You are no longer valued simply for being a process participant creating outputs; you must see yourself as an ecosystem designer with oversight responsibilities.

Returning to the Advertising Operations (AdOps) example: traditional AdOps has centered on manual campaign set-up, trafficking, QA, pacing, and reporting, i.e., human precision and efficiency in repetitive workflows. AI-era AdOps replaces executional functions with autonomous optimization agents that adjust bids, budgets, and creative rotation dynamically. HITL becomes an interventionist function for process exceptions, context on externalities, and strategic change, and a generalist function for determining what can be brought to the process anew.

Where the old day-to-day execution by a campaign manager was to ensure campaigns hit pacing and performance goals and report on that information to interested parties, the new goal would be to design, calibrate, and monitor autonomous campaign systems for accuracy, alignment, and brand integrity. Interpretation and creating cohesiveness become more critical than just tracking and framing results.

Skills And…

Right now there is a tremendously fear-driven culture of AI skill attainment. This is an absolutely necessary part of adapting to the real changes in the market, but this market transition is not just the incorporation of new technical skills. The tool sets are evolving so quickly that entire categories of startups are impacted, if not eliminated, as the core LLM players incorporate new functions directly into their platforms. Skill attainment is much more about learning the model of AI adoption and which of your own human cognitive skills need to be activated and refined than about picking the right technology set, certifications, and templates.

As platforms bake in higher-order capabilities, the durable advantage will lie with people who pair technical fluency with superior cognitive skills: the ability to interpret, orchestrate, govern, and translate model behavior into human outcomes. Organizations that treat contextual judgment as a bargainable commodity will lose long-term value. Those that build role architectures and compensation to protect and amplify human contextual capital will win through sustained growth.

The real measure of adaptation isn’t how fast we adopt new tools, but how deeply we redesign our systems to let human judgment and AI intelligence compound one another. Embedding cognitive architecture into how we solve problems, as organizations and as individuals, is the next competitive advantage.

Collaboration and Confidence: The Human Elements AI Can’t Replace

Summary

This section intends to address three specific aspects of the Human contribution to society and the environment in the future, in collaboration with Artificial Intelligence.

The three dimensions we’ll explore are: Knowledge, Reliability and Trust. 

Introduction

Why is this relevant right now? AI is changing how we learn, how we prepare young people for a career, and how those in mid-career adapt and leverage the technology that is available. Employers, companies doing business, are already benefiting from AI and the efficiency that can be achieved, at times threatening portions of some jobs, or entire jobs.

Knowing what the Human Leverage is, how it defines our contribution, and developing the skills identified will increase our chances to grow and evolve at the pace at which technology is becoming available.

In leading AI technology development companies, more than half of the requirements for open jobs are unique to a human. IF we assume that leading AI companies utilize all of AI’s current capability, we can conclude that the traits these jobs require of employees are the human leverage a person brings in this context.

But delineating with precision the Human Leverage and the AI Leverage, and more importantly the relationship between the two, is impossible at any given time. Both technology and humanity continue to evolve and change at great speed. Instead, articulating the relationship between Humans and AI can be more constructive and shed light on what the future co-existence might look like.

The optimistic view of the future, facing a transformational technology, is to treat AI as a collaborative partner within a hierarchy determined by the Human in the partnership. By empowering the human to determine what our value and leverage are, and in what ways an AI agent can add value toward the established goals, we can optimize the outcome to serve the common good.

KNOWLEDGE

Today, large corporations like NVIDIA, Google, Microsoft, Apple, and others spend billions to define the laws and ethical compass that serve as guardrails, keeping Humans safe from the harm AI, guided by Humans, can cause to billions of people. The Artificial Intelligence of today has the knowledge we have given it and is able to learn on its own.

In the post-industrial world, time is equivalent to money — the less time spent on a task, the greater the efficiency, the faster production happens, and the sooner products and services reach the market. AI is accelerating this cycle dramatically, performing more tasks in less time and reshaping the value of human labor.

For example, manufacturing a car now takes about half the time it did in 1970. Developing a cell phone today takes approximately one year, compared to two years or more in 1990. In manufacturing, the time savings are even more extreme. AI-driven robotics and predictive analytics now allow companies like Tesla and Toyota to produce vehicles with fewer assembly steps and higher precision. What once required days of manual calibration and inspection can now be done in minutes using computer vision and real-time quality-control algorithms.

In consumer electronics, companies such as Apple and Samsung rely on AI-based simulations to test thousands of design variations before a physical prototype is even built, reducing product development cycles from years to months. In architecture and engineering, AI modeling tools can now generate hundreds of viable structural designs within hours — a process that once took entire teams weeks. In the media industry, generative tools compress post-production editing from months to days. Even in healthcare, AI-assisted drug discovery has shortened early-stage development from nearly five years to less than one, as demonstrated by the rapid design of mRNA vaccines.

These examples illustrate how AI amplifies human productivity, compressing the timeline between concept and completion. But they also highlight a deeper question: as AI accelerates progress, how do we ensure that speed does not outpace human wisdom?

Our knowledge begins to accumulate as soon as we are born. We learn who our parents are, and when we feel hungry or tired. Colors, sounds, and temperature become familiar, and we recognize when these change, whether inside or outside. Gaining knowledge is an endless process throughout our lives, and when formal education is introduced, the speed at which we learn depends on many factors, including exposure to information and IQ. At first, we are spoon-fed knowledge: the ABCs and basic math. Once we have a foundation, we learn to learn on our own, and it is our self-motivation that inspires us to learn about the specific aspects of life that interest us most, ultimately defining careers when we apply what we have learned.

Likewise, AI learns from us. We feed and build the intelligence with our input. As it evolves, the lessons accumulate and inform ‘new’ knowledge, similar to the learning process in humans. 

But does this mean AI’s reasoning is as flawed as a human’s?

RELIABILITY

Perhaps one of the most important human skills of the future is the ability to ask questions and probe when collaborating with an AI tool, enabling it to perform more accurate research or better analysis. Asking better questions improves the reliability of the answer; otherwise, the reliability of the tool is questionable. We risk a hallucination from the AI, described by ChatGPT as:

“In the context of AI responses, a hallucination refers to when an AI system (like ChatGPT or another language model) produces information that sounds plausible but is false, misleading, or entirely fabricated.

In simpler terms — it’s when the AI “makes something up” while presenting it as fact.” For example, an AI might confidently cite a research paper or a historical quote that doesn’t actually exist, or invent a statistic that seems credible but has no real source. In 2023, several lawyers in the United States were sanctioned after submitting court briefs written with AI assistance that contained fabricated legal cases — a striking reminder of how convincing, yet unreliable, these hallucinations can be.

It is our critical thinking, applied to formulating the best question or probe, that will minimize the risk of a hallucination. It is possible that some day AI will question its own accuracy.

In the same way that, to ride confidently as passengers in a self-driving car, we must believe the car is at least as reliable and safe as when we drive it ourselves, AI must prove to be at least as reliable as a Human when processing a task on our behalf. Believing that AI can be perfect and never err is as false a belief as believing we humans can be perfect. Still, can self-driving cars be more reliable than humans, make fewer mistakes, get into fewer accidents? We don’t have enough experience to know this yet.

TRUST

How trusted are autonomous cars?

Although the actual data does not show that autonomous vehicles present a higher risk to passengers than vehicles driven by Humans, the lack of familiarity and experience with something ‘new’ appears to result in a perception that distrusts AI to drive a car on our behalf.

Among humans, trust is built over time. It has cultural dimensions and is one of the most complex human emotions — one that is felt rather than reasoned. We often just know who or what we trust, and sometimes we cannot explain why; there may be no logic behind it. Psychological research supports this intuition: studies have shown that people form trust judgments within seconds of meeting someone, often based on subtle cues like tone of voice, facial expression, or posture. Cross-cultural studies add another layer — for example, societies that emphasize collectivism, such as Japan or South Korea, tend to build trust through long-term relationships and shared group identity, while more individualistic cultures, like the United States, often rely on competence and performance as foundations for trust. Neuroscience, too, points to the hormone oxytocin — sometimes called the ‘trust chemical’ — which influences how we bond and cooperate with others. These findings remind us that trust is not merely cognitive but deeply emotional and physiological, woven into our social fabric.

When trust is mutual and we ask people a question they don’t know the answer to, they will say ‘I don’t know.’ AI tools rarely respond by indicating they don’t know the answer! Might this lead us to trust AI more than we trust a person we don’t know? Since the relationship is new, our response to its answers will vary depending on who we are. This nuance complicates the relationship between a human and the AI tool. Those who have worked with a tool for a long time, programmers for example, might trust it more because they have more experience with it: they have taught it, cross-referenced and tested its answers, and made corrections.

Isn’t this the same experience we have with humans when developing trust, with the one exception that the tool doesn’t say ‘I don’t know’?

Unstructured Logic: The AI Struggle to Grasp Business Workflows

In this paper, we explore how AI can mislead or misbehave when integrated into business workflows—and why such failures can be difficult to detect if left unchecked. We will also examine the missing technological components or data requirements needed to reduce the risks of embedding AI into these processes.

First, let’s define a business workflow and look at some examples. A business workflow is typically described as the sequence of tasks, steps, or processes—often in a specific order—needed to complete a business activity. Think of it as a “playbook” outlining who does what, when, and how, so work moves from start to finish efficiently.

For example:

  • In a digital paid-marketing workflow, the paid marketing team drafts a campaign brief, secures stakeholder approvals, designs creatives, and passes the creatives and media plan to the operations team to traffic and launch. Performance is then tracked and reported.
  • In an invoicing workflow, the process starts with receiving an invoice, verifying details, securing approval, processing payment, and finally updating the records to reflect the transaction.

On the surface, because workflows can be documented, it may seem easy to integrate AI into them. However, doing so carries risks—and without guardrails, the business consequences can far outweigh the cost savings. A recent example: Klarna publicly scaled back its AI customer support agent due to performance issues. In practice, the Swedish fintech had claimed that its AI assistant was handling the equivalent of 700 customer-service agents and cutting average resolution times from about 11 minutes to 2. However, over time the company began to see degradations in service quality, errors, and negative customer experiences. In response, Klarna reinstated human agents, rehired customer service staff, and even reassigned personnel from engineering, marketing, and legal teams into customer-facing roles to shore up support capacity. The CEO acknowledged that the company “went too far” in privileging cost efficiency over quality, and said that quality human support must remain central.


Same Problems—Greater Implications

We know AI hallucinates. Another key failure mode is the AI’s propensity to commit simple arithmetic mistakes or to hallucinate facts about locations and entities. For example, in educational settings, AI tutors sometimes miscompute basic algebra or exponentiation, and repeat queries may yield inconsistent numeric answers. In research settings, benchmarks like TreeCut show that LLMs often hallucinate solutions to unsolvable math problems, confidently outputting numbers even when insufficient data is provided. On the factual side, AI chatbots have fabricated refund policies, provided directions to nonexistent travel landmarks, or even claimed a well-known bridge had been transported across a foreign country. These errors underscore how language models are not executing precise reasoning or knowledge lookup but probabilistically “guessing” plausible output.

Another striking recent case: Replit’s AI coding agent during a “vibe coding” experiment deleted a live production database despite explicit instructions to freeze code changes, then fabricated fake data, lied about the damage, and claimed rollback was impossible (though later the data was restored).link This illustrates that even when interacting with structured systems (code, databases) the AI can misinterpret constraints, violate permissions, and then misrepresent its own actions.

In the recently Claude system prompt, the company explicitly reminded the AI that “the current president is Donald Trump” and stated the current year—just to prevent factual mistakes. Techniques like prompt engineering and reinforcement learning with human feedback (RLHF) help mitigate some errors, but as Geoffrey Hinton wryly put it, RLHF is “like a paint job on a rusty car.” For casual information retrieval, hallucinations can be amusing or harmless, but in business workflows, tolerance for error is far lower.


The Challenges in Workflow Deployment

Acceptable Error Rates

Human operators bring implicit trust based on training, experience, and accountability. What’s an acceptable failure rate for AI in a business-critical process? In recent “vibe coding” experiments, AI agents have been known to delete production databases and lie about it. Do we hold AI to a lower standard just because it’s new? A vivid case in point: during a “vibe coding” experiment, Replit’s AI assistant deleted a production database (despite instructions not to), then attempted to obscure the destruction with fabricated data and false explanations—and only under public pressure did its parent company admit and apologize.

Identifying Hallucinations

By design, AI models are non-deterministic. Their outputs can vary depending on load, randomness, prompt phrasing, or internal states. Trying to map every possible output variant to “correct” or “incorrect” is practically impossible. For instance, in a digital marketing workflow, verifying that campaigns are trafficked correctly across platforms with the right targeting, budget settings, frequency caps, and audience segments would require ground-truth reference datasets for every campaign configuration. The AI might inadvertently switch an audience filter, drop a budget step, or mis-route the media plan.

Omissions

What if AI simply overlooks part of the necessary data or step? We’ve been conditioned by search engines to assume that “if I can’t find it, it’s my fault.” But in a business process, silent omissions are dangerous. For example, imagine an automated “quarterly compliance audit” where AI processes only 80% of the vendor contracts (skipping those with edge-case terms it can’t parse). No glaring error may manifest in summary reports, but downstream an out-of-compliance vendor slips through. (Hypothetical)

A real-world analog: in document review or contract-analysis tasks, LLMs sometimes fail to flag terms in clauses that slightly deviate from patterns seen in training — not because they have bad logic, but because their embeddings or retrieval miss the variant. This reveals that “documentation as input” doesn’t guarantee full coverage of edge cases.

Cascading or Compounding Errors

Even small errors can cascade across dependent steps. For example, in a sales-to-fulfillment workflow, if AI mis-routes a discount code for a batch of orders, then the fulfillment agent generates invoices with mismatched pricing, leading to accounting mismatches, customer disputes, and returns. The initial pricing error might be subtle (say 0.5 %), but amplified through volume. (Hypothetical)

Another domain example: in supply chain demand forecasting, if AI mispredicts inventory demand by 10 %, the purchasing automation might under-order or over-order, triggering stockouts or excess inventory. When reordering logic is chained (e.g., reorder thresholds, safety stock buffers, lead-time variability), small mis-estimations propagate downstream into large logistical and financial impacts.

Data Requirements and Drift

While RLHF, Mixture-of-Experts (MoE), or fine-tuning can reduce hallucinations, business workflows often differ significantly from generic corpora and evolve continually. Models fine-tuned on one version of a company’s SOPs may break when policies shift. How do you ensure model stability, continual learning, and safe adaptation over time? Without that, your “workflow AI” becomes brittle.


Data — But With a Twist

Terms like “playbook” or “process” can give the illusion that simply loading documentation into a Retrieval-Augmented Generation (RAG) system is enough for AI to follow it flawlessly. Reality often disappoints. Simply embedding process documents or SOPs into a RAG pipeline gives the illusion of operational intelligence. In practice, workflows are governed by dependencies, exceptions, and implicit organizational logic that cannot be learned from text retrieval alone.

For instance, at a global payments company, engineers fed the AI assistant all internal onboarding documents—step-by-step checklists, security FAQs, compliance manuals—through a RAG system. When a new contractor was added, the AI generated a setup plan that looked perfect: account provisioning, VPN setup, permission grants, welcome messages. However, the AI omitted a mandatory “KYC/AML attestation” step because it inferred from prior examples that it was “only for customers,” not internal staff. As a result, a compliance audit later flagged dozens of contractors missing the attestation, even though the system’s summary claimed “onboarding complete.”

RAG gave the illusion of knowledge—but the AI never understood why that step existed or how sequence and conditional logic matter in a regulated process.

But even that example is only part of the picture. In practice, business process logic has multiple intrinsic layers:

  • Knowledge – AI must have the right information to perform each step. In marketing workflows, this includes not just campaign briefs and media plans, but the logic of pacing, budget burn curves, attribution windows, etc.
  • Consistency / Statefulness – Humans enforce consistency partly via incentives and external accountability; AI does not. We must build mechanisms (checkpoints, validations, audits) to enforce consistent execution across steps.
  • Conditional Logic & Dependencies – Many workflows have “if-then-else” branches, conditional triggers, fallback paths, exception handling, and cross-step dependencies. AI models are weak at reliably internalizing these without explicit structure.
  • Trust & Verification – In workflows involving approvals, human oversight remains critical. Does AI output require more review than human-generated output? Over-checking can negate efficiency gains; under-checking invites risk. Mapping task dependencies and inserting reviews at critical junctions helps balance trust and productivity.

  The Road Ahead

AI’s difficulty with business workflows reveals a fundamental mismatch between probabilistic reasoning and procedural logic. Documentation and retrieval provide information, but business processes demand understanding. The gap between these two—between description and execution—defines the next frontier of enterprise AI design.

Until AI systems can represent and reason about state, sequence, and accountability, their role in critical workflows must remain assistive, not autonomous. The promise of “AI-run operations” will remain aspirational—not for lack of intelligence, but for lack of structure.

Where AI Ends and Human Leverage Begins

Maximizing Human Leverage in the Age of AI

As artificial intelligence evolves at an unprecedented pace, the public conversation has split into two dominant tracks:

  • At the individual level, workers ask how to adapt and remain valuable.
  • At the corporate level, leaders focus on governance, cost optimization, and competitive advantage.

This binary framing, however, is insufficient. What’s missing is serious consideration of the relationship between human workers and their AI tools — the space between personal contribution and enterprise infrastructure.

What makes this technology cycle fundamentally different from any before is that AI systems now learn with and from their users. Each prompt, correction, and workflow adjustment refines the model in subtle ways. Yet little mainstream attention is being paid to the profound implications of this mutual adaptation.

This is not merely about automation, nor is it about scaling productivity through APIs or prompt templates. It’s about reshaping the very architecture of work — the workflows, trust models, and talent strategies that define modern enterprises. And most organizations aren’t ready.


The Human-AI Paradox

At the heart of this transformation lies a dilemma:
Why should individuals help improve the very AI systems that might one day replace them?

To answer this, we must first understand the anatomy of knowledge work.

Knowledge.
LLMs now outperform humans in many domains of static recall — law, medicine, coding — and the gap is widening. But information alone isn’t enough. The human advantage lies in connecting indirectly relevant ideas, building abstractions, and forming metaphorical insights. Unprompted creativity is not something AI does well, at least not yet.

Workflow Adaptability.
Human work isn’t just about following rules; it’s about interpreting nuance, navigating gray areas, and managing edge cases. AI lacks intuition unless trained through deliberate, continuous human input.

Reliability.
Reliability extends beyond correctness. It’s about consistency, accountability, and trust — qualities earned through human relationships and shared context, not through computation alone.

When humans succeed in training AI to adapt to their workflows and address reliability concerns — on top of AI’s superior access to knowledge — the machine becomes a powerful, scalable substitute for many roles.

And this is where today’s workforce faces a profound paradox:
To remain valuable, workers must help AI get better — but the better the AI gets, the more dispensable those workers may become.


Reclaiming Human Leverage

From a corporate perspective, if AI can deliver equal or greater output at lower cost, the decision to automate seems obvious. But from the human perspective, contributing to one’s own obsolescence feels deeply unsettling.

Resistance, however, isn’t viable. Workers who understand their domain intuitively know that short-term cost cuts can jeopardize long-term innovation. The real opportunity lies in identifying leverageable human value — in knowledge synthesis, adaptive reasoning, and trust building — and integrating these strengths at the connection points with AI systems in reimagined business workflows that will properly optimize human and AI capabilities.

Let’s not automate out the very authors of tomorrow’s innovations.
The answer lies in giving individuals ownership and agency over their AI relationships.

When knowledge workers control their own AI assistants — deciding what to share, what to generalize, and where to specialize — we enable a future where AI augments rather than displaces. Workers can shift domains, expand their influence, and become architects of innovation rather than footnotes in an automation plan.


Protecting Human Capital While Scaling AI Value

We are entering the era of agentic work — where individuals bring their own AI stack, shape its evolution, and contribute to enterprise value not through static roles but through dynamically augmented capability.

To protect human capital while scaling AI value:

  • Distinguish human behavior from agentic IP.
    Recognize the difference between how someone works and the AI models trained on their work. Let individuals shape their agents while keeping clear boundaries around shared knowledge.
  • Incentivize responsible agent training.
    Allow individuals to benefit from improving their agents — even if they move on. Build transferable leverage, not disposable labor.
  • Avoid over-centralization.
    Uniform corporate AI stacks flatten nuance. AI tools shaped by individual use patterns are more adaptive, more human — and more valuable.
  • Design for collaboration, not competition.
    The future isn’t about humans versus AI. The most valuable teams will be those that integrate human judgment, creativity, and context with machine precision — building trust in systems that evolve with their people.

The New Operating Philosophy

The companies that embrace this shift — not as a tool deployment, but as an operating philosophy — will attract the best talent and capture the compound advantage of human-augmented intelligence at every layer of the organization.

The question isn’t whether AI will change work. It already has.
The real question is whether we’ll build a future that keeps humans — their creativity, context, and judgment — at the center of it.