Corporate Cultures Must Change

Transition to Culture by Design, Human Powered, AI Augmented

AI is rapidly becoming a transformational resource. It’s already affecting white-collar jobs. In response, organizations would benefit from reconsidering the following:

  • Which values still apply?
  • Which have become unnecessary (or archaic)?
  • Which new ones could be beneficially added?
  • Are we best supporting humans in their evolving roles?
  • Are explicitly stated cultures and values necessary to succeed in business?

There are many reasons why these assumptions about culture need to be challenged.

Corporate culture wasn’t conceived as a concept until the 1920s, when a study showed that “social relationships and group norms strongly influenced worker performance.” Around 1979–1982, “academic and business interest surged, and executives began seeing culture as a driver of performance, not just a byproduct.”

Business motivation drove the concept from the start.

Does a change as transformational as the current use and early adoption of AI into the business warrant a chat around the water cooler or espresso machine? Should companies wait until the dust settles? Is there a perfect time to contemplate how an intentional culture may or may not contribute to success in the new environment?

At a minimum, companies should reflect on their current culture and whether their values conflict with the narrative that efficiency and productivity are the only ways to compete and win.

Recent departures of key talent at AI companies like Anthropic and Meta are indicative of a shift in corporate culture and a change in values, resulting from AI entering the scene. So it is not just the thousands of people being laid off that signal the impact of AI on jobs, it is also the few that these companies might have preferred to retain.

The nature of cultures

Cultures develop by design or by default. Organic cultural foundations form when founders and entrepreneurs hire like-minded individuals who might share similar values and have bought into the vision and mission of the endeavor.

The leaders set the tone for behavior. The boundaries are set by how they act: starting times, communication methods, degrees of formality, handshake agreements versus lengthy legal documents, whether cursing is allowed, social behaviors, personal boundaries (nights-and-weekends calls from managers), etc.

The commitments regarding acceptable behaviors that are made at the beginning of the relationship are more often implicit, and the hierarchy establishes the norms amongst team members. The tone is normalized and the dynamic can be smooth when all team members behave in a similar manner.

These initial behaviors might or might not serve the company well over time. Cultures evolve! When Alphabet was formed as Google’s parent company in 2015, its code of conduct led with “do the right thing,” and in 2018 Google moved its famous “don’t be evil” motto from the preface of its code of conduct to a brief mention at the end.

It’s unclear why the emphasis shifted, but the change in narrative matters. While the interpretation of ‘evil’ is nearly universal, what is ‘right’ is much more debatable, giving wiggle room to behaviors within organizations.

As the organization grows beyond the circle of internal referrals, it hires individuals who might object to existing behaviors, or who bring in different and potentially controversial ones. The culture then enters a transition period of adjustment and correction that typically results in a culture palatable to different people, with diverse values and experiences.

If self-awareness is one of the embedded behaviors, the leadership observes how the culture is changing and intentionally addresses sources of conflict, habits that work against growth and harmony, and even the question of how much conflict to embrace in the spirit of innovation. If awareness is not in the repertoire, or conflict aversion is baked into the foundation, the opportunity to correct course is missed, and the culture deteriorates to the point of dysfunction and, ultimately, poor economic performance. Enron and Theranos are extreme examples of morally corrupt cultures that failed miserably. But there are less extreme examples.

Regardless of the stated core values, what matters most is this: can the leadership be trusted to do what they say they will do? It matters less what that is. If the leadership declares a core value of ‘do no evil’ and then starts doing evil, they can’t be trusted. If the message is ‘make as much money for the company as possible, and you’ll be rewarded for upholding that value,’ and they follow through, that’s a trustworthy value. We can judge whether the values align with ours and choose not to work for that company. But that is different from judging whether values are truthful and honest.

When it comes to TRUST, doing what we say we will do is all that matters. 

Enter AI 

AI will impact companies’ cultures differently at various stages and over varying timeframes.

Agility is vital to transitioning into a fully productive and satisfying culture that employees can be proud of. Larger companies might therefore encounter more challenges pivoting and minimizing chaos, while smaller companies might find it easier.

For new companies, entrepreneurs and leaders come with experience, but also into a fundraising environment that has already determined their worth. Many are promising success based on their AI ‘strategy.’ Few will deliver, perhaps the same percentage as when past technology hype swept over us (the dot-com bubble of 1995–2000, and the bust of 2000–2002). Or a pandemic! These are external factors that impact culture, and we have limited control, if any, over them.

With AI on the scene, CEOs still address culture. 

Two examples worthy of mention in this context:

Reed Hastings, Netflix’s founder, longtime CEO, and current Chairman of the Board, has built a reputation as a cultural leader. Among his most famous quotes:

  • “Lead with context, not control.”
  • “Companies rarely die from moving too fast, and they frequently die from moving too slowly.”
  • “At Netflix, we think you have to build a sense of responsibility where people care about the enterprise. Hard work… doesn’t matter as much to us. We care about great work.”
  • “You have to be big, fast, and flexible.”
  • “Most entrepreneurial ideas will sound crazy, stupid and uneconomic, and then they’ll turn out to be right.”

What would be the behaviors in the organization that these messages might influence? 

Alex Karp, Palantir’s CEO, has made waves in the corporate world with his political stance, his economic success, and his openly anti-woke narrative. These are some of his quotes.

  • “Warrior Culture” & Performance: “We are dedicating the company to the service of the West and the USA.” Karp describes his company as “chaos in the best way,” where people move fast and focus on mission over hierarchy.
  • Anti-Woke Meritocracy: Karp calls his company “completely anti-woke,” emphasizing that talent and results matter more than ideology. He has described Palantir’s values as “fighting for the right side of what should work in this country — meritocracy, lethal technology,” as noted by Business Insider.
  • Value Creation vs. Status Quo: “We get paid on value creation. Everybody in this world is going to get paid on value creation,” as reported by Forbes.
  • Corporate Accountability: “Poor people are the only people who pay the price for being wrong in this culture,” Karp said, arguing that executives should absorb the risks of their decisions, as reported by Fox Business.
  • Western Values & Tech: In a letter, he stated that “the rise of the West was not made possible ‘by the superiority of its ideas or values or religion… but rather by its superiority in applying organized violence.’”

Same question: what behaviors can be expected from an organization led with these goals and values?

It would take many more interviews to fact-check the actual impact of these leaders’ messages on culture, core values, and behaviors. The point here is not to define what a good or bad culture is. The point is that employees are buying into these cultures, and the learned behaviors, and how they impact the world, don’t stay at work. This has to be a conscious decision.

Early, keen awareness of how external factors are impacting our business and professional growth applies here. But awareness alone is insufficient. These conversations with the team need to happen now. Not decision-making meetings, but thought-provoking discussions that inspire team members to have agency over the outcome and build an emotional and intellectual individual investment that may result in a shared benefit for the employer and the employee. Organizations will benefit from the inclusion of diverse opinions regarding AI adoption. The topic must be demystified, so that the organization can recognize AI as a resource, or a challenge, for the human in each role.

It can’t be left at ‘talk.’ The outcome has to be acted upon, and with the sense of urgency that applies when the environment is moving as fast as this one is. Analysis paralysis could kick in, because this is a complex and complicated challenge.

Most leaders might choose to ask an AI agent about cultures, values and behaviors. The responses will be based on history, how these concepts have been interpreted in the past. The AI content is expansive, because the opinions and developments regarding cultures have been fully documented by humans. The leaders will still have to make a judgment call, and this is the right time to be innovative and creative, not to copy and paste from the past.

The Opportunity

“The transition to an A.I.-first world may be inevitable, but the path is still being paved with the heavy lifting of the very people being phased out.” Is this what the company wanted?

Moving too fast through the “transition” period is a mistake, because we are not operating in business-as-usual mode. It is a high expectation, given that what’s ‘right’ is up for interpretation. Some executives care a lot, others don’t, but changes in leadership take time, and full value alignment is challenging to achieve.

A lukewarm, comfortable-enough culture won’t cut it if companies want employees to embrace the change. It must be good enough to retain all valuable talent, or companies risk losing what gave them their competitive advantage.

Employers that proactively decide to transform the culture, prevent unintended harm to humans, use AI to enhance performance, feed self-esteem and confidence in all, and build a greater business culture than previously conceived, will demonstrate the way to preserve and build a thriving human workforce. Employees can view this as an opportunity to emphasize the FORCE in the workforce, and insist on the changes that they wish to see enacted in the new environment.

This conversation with employees about AI-driven transitions has to start now, along with dedicating adequate budgets and time for training and development, and exercising flexibility on what type of employment relationship makes sense for varied roles. Organizational development and headcount planning can be very different from how they have been done.

Companies and employees will benefit from better planning and more training. Companies that do this well and early will have an edge in the future. Employees that receive AI training, and adjust well to the new environment, will acquire more value and agency regarding their future.

These choices can result in economic success AND job satisfaction for all involved.


The Art and Science of Workforce Transition in the AI Era

All relationships end.

In the cleanest scenario, as ironic as it might be, the relationship ends with a death. Closure is irrefutable, and the grieving process is filled with humanity and compassion from others.

The loss of a job is much muddier, and we can hide behind the adage of ‘it’s not personal, it’s business.’ But what is more personal if not the loss of livelihood? It removes our ability to provide for our families; it brings the shame and embarrassment that comes from being the one let go, and not others.

When I quit a job and gave one month’s notice, as is typical for an executive role, and was walked out rudely and unnecessarily, I felt a tinge of what being fired or laid off might feel like. It felt like a stomach flu and regular flu at the same time, sadness, anger and confusion. What did I do wrong? The reflection came much later, and I came to understand that the employer’s reaction only reflected the quality of  the relationship we had while I was employed, or the lack thereof, and the trust that wasn’t at the level I thought. My bad. But also, theirs?

Separating amicably is both an art and a science.

The science part is about business economics and process management, and the art is about relating as human beings even while we are separating. A smooth parting of ways is possible, even under the worst of circumstances. This is where corporate culture, as experienced by the employees, makes a difference. Connecting as humans in the process has always been challenging, and labor laws give all corporations pause when it comes to messaging: what language should be used to explain ‘why’ the separation is happening. It’s complicated.

After sitting for decades at the table where these life-altering decisions have been made, the experience hasn’t changed much. Leadership teams struggle with their decisions and positions depending on how they view people in the equation. Are we resources? Team members? Collaborators? Contributors? Commodities? Perhaps we saw and understood these attitudes and beliefs during the good times, but there is no other time when we experience the true colors more than when the ship is going down, or as we have seen recently, when the data shows that companies don’t need, or won’t need, the same number of people currently on their payroll.

Separation as Strategy: How Thoughtful Exits Drive Human Leverage

Whether we call it a layoff, restructuring, or efficiency measures, the emotional impact on the individual losing their job is the same. They were rejected. Dumped. Hurt. In some cases, careers are negatively impacted. And the tendency, due to cultural values, is to believe that more money (or less, if you are the one paying) is the goal, the answer that will make the problem go away quickly. When the communication and the process are mismanaged by the company, not only can it be more costly, but it can destroy any semblance of trust and loyalty that might have been built with the workforce. The employee carries this sentiment to the next job, starting it with suspicions about the new employer and their intentions.

‘Employees are our greatest asset’ seems like an antiquated, or today insincere, tag line that many companies continue to include in their stated core values. When a company does not act on this value, all its stated values may be questioned. Are customers first? Are vendors partners? These stated values might live on websites and in employee handbooks, but in practice, does leadership live them, especially when employees are told they are no longer employed, that effective today they are terminated? Apt messaging after a termination decision could and should reinforce these values, rather than contradict them.

When very successful companies (based on market valuation, profitability, and growth projections) lay off workers because efficiency, competition, and productivity can be achieved by AI, this decision can be the most challenging to accept, the most painful, and the most confusing. In an M&A transaction, it is common to have redundancies in roles. It still hurts, but at some rational level it might be understood by the employee.

Handling these layoffs unskillfully has the real potential to damage the fabric of our society. To prevent this, we need skillful communication: empathy, listening, patience, sympathy, negotiation skills, diligent follow-up, and even love for our fellow human beings throughout the process. For an employee, separating from an employer involves a grieving process. To come through that process with one’s dignity intact requires support from the employer and/or from the wider community.

As with the death of a loved one, nobody can jump to acceptance without going through the process of grieving. Moreover, nobody can do this alone. Employers have the responsibility, the duty, to support employees through this process. Those who fail to do so risk damaging the morale of the employees who have kept their jobs (this time). It can devastate the post-layoff company culture.  

Elevating Transitions: A Human-First Framework for Separation

The time to review the off-boarding process and how decisions are made and communicated is now.

Managing the Process:

  1. Timing. Although there is no good timing, companies can make it worse by having a layoff right before a holiday. When the reason for the layoff is financial (revenues declining, no profitability, running out of money), paying for holiday time makes a real difference, because payroll for everyone else is at risk. But when the action is driven by predicted efficiency (time will tell), why not pay for the holidays and make the effective date later? The amount of money is less relevant than the gesture.
    • Spend time with people. Listen to their feelings. This is time invested in the reputation for the future of the organization.
    • Do not rush. The emotional process varies by individual. Take the cues from those laid off.  Some will emote longer than others.
    • Never on a Friday or before the end of the month.
  2. Communication. Tell it like it is. Express empathy for those affected. Avoid making it sound like it’s a ‘good move’ for anyone. It is not. Save the positive vision for later, for those who remain employed. Reducing the communication to a mass email to all being laid off is not just insufficient, it’s ignoring the individual impact – that varies widely depending on personal circumstances. Chronic illnesses, college tuitions, mortgages, pregnancies and disabilities, for example, could all be dependent on one person’s employment. Yes, it is a lot more work on the part of the company, but the individual touch speaks to the character of the leadership and the ‘people’ culture invoked. 
  3. Organization. Have all needed documentation and information ready and easily accessible. Leading people to a website is convenient, but also transactional. There is upside to handing out a paper version if the layoff is done in person: something easy to refer to. EDD unemployment links, COBRA for benefits, a summary of the last paycheck, etc. Make this EASY for those leaving. For remote employees, offer office hours when they can ask questions.
  4. Outsource Support. IF the reason for the layoff has to do with AI efficiencies, conduct a workshop on available AI resources, help with resumes, provide a list of companies hiring, and help people process the change. Use HR to lead and facilitate multiple sessions, individual and group; all of this helps laid-off individuals and differentiates great companies from others. Train laid-off employees on how to use AI in their search. HR can organize support-group opportunities, like coffee gatherings, to maintain the ‘connection.’ This builds goodwill with retained employees as well.
  5. Exit. IF the company trusted the employee the previous week not to be violent, not to steal, not to want to hurt others in any way, and to behave professionally and decently, why not trust them the day of the layoff? This is about measured risk-taking; if the leadership knows the team members and can predict their behavior, why not treat them like law-abiding employees and spend time, one last time, to part amicably?

In turn, employees should respect the company’s policies, including returning company property (laptops, cell phones, etc.). 

Food for Thought

In the recent article The Agent in the Backpack, we addressed how employees can determine their value in relation to how they can use AI. It is becoming increasingly critical to intentionally reflect on what we each bring to the table for a new employer, or when becoming self-employed. There is a new vocabulary, and to engage in the new relationship, we have to speak the same language.

Collaboration and Confidence: The Human Elements AI Can’t Replace

Summary

This section addresses three specific aspects of the Human contribution to society and the environment in the future, in collaboration with Artificial Intelligence.

The three dimensions we’ll explore are: Knowledge, Reliability and Trust. 

Introduction

Why is this relevant right now? AI is changing how we learn, how we prepare young people for a career, and how those in mid-career adapt and leverage the technology that is available. Employers are already benefiting from AI and the efficiency that can be achieved, at times threatening portions of some jobs, or all of them.

Knowing what our Human Leverage is, how it defines our contribution, and which skills to develop will increase our chances of growing and evolving at the same pace the technology is becoming available.

In leading AI technology development companies, more than half of the requirements for open jobs are unique to a human. IF we assume that leading AI companies utilize all of AI’s current capability, we can conclude that the traits required of employees to perform these jobs are the human leverage a person brings to this context.

But delineating with precision the Human Leverage and the AI Leverage, and more importantly the relationship between the two, is impossible at any given time. Both technology and humanity continue to evolve and change at great speed. Instead, articulating the relationship between Humans and AI can be more constructive and shed light on what the future co-existence might look like.

The optimistic view of the future, in the face of a transformational technology, is to treat AI as a collaborative partner, with a hierarchy determined by the Human in the partnership. By empowering the human to determine what our value and leverage are, and in what ways AI agents can add value toward the goals established, we can optimize the outcome to serve the common good.

KNOWLEDGE

Today, large corporations like NVIDIA, Google, Microsoft, Apple and others spend billions to define the laws and ethical compass that would serve as guardrails to keep Humans safe from the harm AI, guided by Humans, can cause to billions of people. The Artificial Intelligence of today has the knowledge we have given it, and it is able to learn on its own.

In the post-industrial world, time is equivalent to money — the less time spent on a task, the greater the efficiency, the faster production happens, and the sooner products and services reach the market. AI is accelerating this cycle dramatically, performing more tasks in less time and reshaping the value of human labor.

For example, manufacturing a car now takes about half the time it did in 1970. Developing a cell phone today takes approximately one year, compared to two years or more in 1990. In manufacturing, the time savings are even more extreme. AI-driven robotics and predictive analytics now allow companies like Tesla and Toyota to produce vehicles with fewer assembly steps and higher precision. What once required days of manual calibration and inspection can now be done in minutes using computer vision and real-time quality-control algorithms.

In consumer electronics, companies such as Apple and Samsung rely on AI-based simulations to test thousands of design variations before a physical prototype is even built, reducing product development cycles from years to months. In architecture and engineering, AI modeling tools can now generate hundreds of viable structural designs within hours — a process that once took entire teams weeks. In the media industry, generative tools compress post-production editing from months to days. Even in healthcare, AI-assisted drug discovery has shortened early-stage development from nearly five years to less than one, as demonstrated by the rapid design of mRNA vaccines.

These examples illustrate how AI amplifies human productivity, compressing the timeline between concept and completion. But they also highlight a deeper question: as AI accelerates progress, how do we ensure that speed does not outpace human wisdom?

Our knowledge begins to accumulate as soon as we are born. We learn who our parents are, and when we feel hungry or tired. Colors, sounds, and temperature become familiar, and we recognize when these change, whether inside or outside. Gaining knowledge is an endless process throughout our lives, and when formal education is introduced, the speed at which we learn depends on many factors, including exposure to information and IQ. At first, we are spoon-fed knowledge: the ABCs and basic math. Once we have a foundation, we learn to learn on our own, and it is our self-motivation that inspires us to learn about the specific aspects of life that interest us most, ultimately defining careers when we apply what we have learned.

Likewise, AI learns from us. We feed and build the intelligence with our input. As it evolves, the lessons accumulate and inform ‘new’ knowledge, similar to the learning process in humans. 

But does this mean AI’s reasoning is as flawed as that of a human?

RELIABILITY

Perhaps one of the most important human skills of the future is the ability to ask questions, to probe, when collaborating with an AI tool, enabling it to perform more accurate research or better analysis. Asking better questions improves the reliability of the answer. Otherwise, the reliability of the tool is questionable. We risk a hallucination from the AI, described by ChatGPT as:

“In the context of AI responses, a hallucination refers to when an AI system (like ChatGPT or another language model) produces information that sounds plausible but is false, misleading, or entirely fabricated.

In simpler terms — it’s when the AI “makes something up” while presenting it as fact.” For example, an AI might confidently cite a research paper or a historical quote that doesn’t actually exist, or invent a statistic that seems credible but has no real source. In 2023, several lawyers in the United States were sanctioned after submitting court briefs written with AI assistance that contained fabricated legal cases — a striking reminder of how convincing, yet unreliable, these hallucinations can be.

It is our critical thinking, applied to formulating the best question or probe, that will minimize the risk of a hallucination. It is possible that some day AI will question its own accuracy.

In the same way that, to ride confidently as passengers in a self-driving car, we must believe the car is at least as reliable and safe as when we drive it ourselves, AI must prove to be at least as reliable as a Human when processing a task on our behalf. Believing that AI can be perfect and never err is as false a belief as believing we humans can be perfect. Still, can self-driving cars be more reliable than humans, make fewer mistakes, get into fewer accidents? We don’t have enough experience to know this yet.

TRUST

How trusted are autonomous cars?

Although the actual data does not show that autonomous vehicles present a higher risk to passengers than those driven by Humans, the lack of familiarity and experience with something ‘new’ results in a perception that distrusts AI to drive a car on our behalf.

Among humans, trust is built over time. It has cultural dimensions and is one of the most complex human emotions — one that is felt rather than reasoned. We often just know who or what we trust, and sometimes we cannot explain why; there may be no logic behind it. Psychological research supports this intuition: studies have shown that people form trust judgments within seconds of meeting someone, often based on subtle cues like tone of voice, facial expression, or posture. Cross-cultural studies add another layer — for example, societies that emphasize collectivism, such as Japan or South Korea, tend to build trust through long-term relationships and shared group identity, while more individualistic cultures, like the United States, often rely on competence and performance as foundations for trust. Neuroscience, too, points to the hormone oxytocin — sometimes called the ‘trust chemical’ — which influences how we bond and cooperate with others. These findings remind us that trust is not merely cognitive but deeply emotional and physiological, woven into our social fabric.

When trust is mutual and we ask a person a question they can’t answer, they will say ‘I don’t know.’ AI tools rarely respond by indicating they don’t know the answer! Might this lead us to trust AI more than we trust a person we don’t know? Since the relationship is new, our response to answers might vary, depending on who we are. This nuance complicates the relationship between a human and the AI tool. Those who have worked with a tool for a long time, programmers for example, might trust the tool more because they have more experience with it: they have taught it, cross-referenced and tested its answers, and made corrections.

Isn’t this the same as the experience we have with humans, developing trust, with the only exception being that the tool doesn’t say ‘I don’t know’?