Thomas Z. Ramsøy

The Future of AI: What 2024 Holds for Responsible and Ethical AI


Artificial intelligence (AI) has been a hot topic for several years now, and it’s not hard to see why. While its popularity has been rising steadily, 2023 will be remembered as a true bifurcation moment.

In short, AI has gone from a promising technology that was always just around the corner to something we’re all scrambling to keep up with. It follows the playbook of disruptive innovation neatly: we are now officially in the phase of disruption by AI.

I’ve been fortunate to work in the innovation space: helping startups in Silicon Valley, driving technology adoption in Fortune 500 companies, and experiencing the different flavors of innovation in Europe. A lot of this was put into words with my good colleagues and friends Kyle Nel and Nathan Furr in our book “Leading Transformation.”

In the book, we explain how innovations tend to follow a pattern of disappointment until a crossover point where the exponential development of technology overtakes our expectations. This point of disruption is where the innovation sits on an exponential slope, while our human minds follow a linear, step-by-step trajectory. The gap between expectation and reality only keeps widening.

We have now moved beyond this point of disruption, and we’re moving into the domain of magic…

[Figure: The disruption curve of AI innovation]

The Shift in AI: From Innovation to Responsibility and Ethics

Artificial Intelligence (AI) has already demonstrated an impressive potential that could fundamentally transform the way we live, work, and interact. This potential is not just theoretical – it is already having a substantial impact on a multitude of industries, from healthcare and finance to automotive and entertainment.

However, as AI permeates ever more aspects of our lives and becomes increasingly prevalent in our society, it becomes crucially important to consider the ethical implications of its use. We must ask ourselves: How can we ensure that these technologies are used for the betterment of society rather than to its detriment? How can we protect individual privacy and rights? How can we reduce bias in AI algorithms?

As we look forward to 2024, many are eagerly anticipating the next “wow” moment in AI, predicting which new, groundbreaking AI models and products could change our lives in ways we cannot yet imagine. I have a different set of predictions. Mine are not about the newest, most astounding technological advancements; they concern responsibility, ethics, and the human impact of AI.

So let’s take a step back from the hype and reflect on what I believe will be some of the most impactful aspects of AI in 2024. They might not be the flashiest or most attention-grabbing, but they could significantly shape the way we live, work, and interact with AI.

Responsible and Ethical AI

In 2024, we can expect to see a greater focus on responsible and ethical AI. According to a report by Gartner, “by 2024, 75% of organizations will shift from piloting to operationalizing AI, driving a 5X increase in streaming data and analytics infrastructures” [1]. This means that companies will need to focus on where their data comes from, ensuring that their AI models are not skewed or distorted, and that they are transparent about how the data is acquired.
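To make this concrete, below is a minimal sketch, in Python with pandas, of the kind of first-pass data audit this implies: checking how well each group is represented in a training set and whether outcomes are skewed across groups. The column names and data are hypothetical, and a real responsible-AI review goes far beyond this.

```python
import pandas as pd

# Toy example: a hypothetical training set with a "group" column
# and a binary "label". Column names and values are illustrative only.
df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "B"],
    "label": [1, 1, 0, 0, 0, 1, 0, 0],
})

# Representation: how much of the data does each group contribute?
representation = df["group"].value_counts(normalize=True)

# Outcome rate: does the positive label occur at very different rates per group?
positive_rate = df.groupby("group")["label"].mean()

print("Share of records per group:")
print(representation)
print("Positive-label rate per group:")
print(positive_rate)
# Large gaps in either measure are a signal to investigate data provenance
# before training or deploying a model on this data.
```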

Ongoing lawsuits, such as The New York Times’ copyright-infringement case against OpenAI, highlight the importance of responsible AI practices and will shape how AI models are trained [2].

It does not help, either, that OpenAI appears to have “quietly deleted language expressly prohibiting the use of its technology for military purposes from its usage policy.” At the same time, the European Commission has indicated that it may investigate OpenAI’s relationship with Microsoft.

And speaking of the European Union, just before the end of 2023 we saw political agreement reached on the AI Act. I believe the full ramifications of this will only become apparent in the coming months, and that it will become a de facto standard for responsible AI practices both within and beyond the EU.

Not to forget, I also recently showed that large language models (LLMs) not only replicate but exacerbate (and even pervert…) human biases such as the anchoring effect, and that they can distort certain human behaviors. Some of these distortions are more obvious than others, and it is the less obvious ones we should fear the most.
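To illustrate what such a probe can look like, here is a minimal sketch in Python: query a model repeatedly after exposing it to a low versus a high numeric anchor, and compare the average estimates. It assumes the OpenAI Python client; the model name, prompts, and number parsing are my own illustrative choices, not the actual protocol from that work.

```python
import re
from statistics import mean
from openai import OpenAI  # assumes the openai>=1.0 client and a valid API key

client = OpenAI()

def mean_estimate(anchor: int, n: int = 20) -> float:
    """Ask the model for a car-price estimate after exposing it to a numeric anchor."""
    prompt = (
        f"A colleague guessed that the average new car costs about ${anchor:,}. "
        "Ignoring that guess, give your own best single-number estimate of the "
        "average price of a new car in USD. Reply with a number only."
    )
    estimates = []
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name, purely for illustration
            messages=[{"role": "user", "content": prompt}],
            temperature=1.0,
        )
        text = resp.choices[0].message.content or ""
        match = re.search(r"\d[\d,]*(?:\.\d+)?", text)
        if match:
            estimates.append(float(match.group().replace(",", "")))
    return mean(estimates)

low = mean_estimate(anchor=10_000)
high = mean_estimate(anchor=90_000)
print(f"Mean estimate with a low anchor:  {low:,.0f}")
print(f"Mean estimate with a high anchor: {high:,.0f}")
# A sizeable gap between the two means is consistent with an anchoring effect.
```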

What does all of this mean for our future? My analysis leads me to believe that 2024 will mark a significant turning point. We can anticipate a combined effort from public entities, legislative bodies, and corporations, united by a shared commitment to a heightened degree of responsibility and ethics. This pushback will be more pronounced and widespread than anything we have seen previously, affecting all sectors of society. It signals a collective shift toward more ethical practices and responsible actions, which I believe will be a defining hallmark of 2024.

Adopting AI Solutions

Companies will need to aggressively scale up their adoption of AI solutions. According to a survey conducted by Deloitte, an astonishing “83% of executives say their companies have either already implemented AI or are piloting the technology.”

This speaks volumes about the increased relevance of AI in the modern business landscape. However, the path to successful AI implementation is not without its challenges.

Many companies are currently grappling with the question of which AI products to choose, and how to validate and test them effectively.

Furthermore, there is the added challenge of staying abreast of the rapid pace of AI development. This points to a pressing need for roles dedicated to navigating these challenges. It also hints at a requirement for AI companies to better demonstrate the “jobs to be done”, a framework developed by Clayton Christensen which posits that customers hire products to do specific jobs.

In essence, AI companies need to better articulate and showcase how their products or services can help companies achieve specific tasks or solve particular problems. This approach will aid companies in making informed decisions about which AI products to use and how to implement them effectively.

AI and Job Displacement

Another important issue is the impact of AI on job displacement. According to a report by McKinsey, “between 400 million and 800 million individuals could be displaced by automation and need to find new jobs by 2030 around the world.” This means that companies will need to be mindful of the impact that AI will have on their workforce and take steps to mitigate it.

AI’s impact on job displacement is likely to be felt in almost every industry. While some jobs may become obsolete, others will be created as new needs and technologies emerge. This shift could lead to a significant restructuring of job markets and even entire economies.

Companies will need to take proactive measures to manage this transition. This might include investing in training and re-skilling programs for their employees, creating new roles that take advantage of AI technologies, and working with policymakers to ensure that the benefits of AI are widely shared.

Furthermore, as AI becomes more prevalent, there will be a growing need for professionals who can understand, develop, and manage AI systems. This could lead to a surge in demand for skills such as data science, AI ethics, and machine learning.

Conclusion and Future Outlook

In conclusion, 2024 will be a year of significant change for AI. Companies will need to focus on responsible and ethical AI practices, ramp up their adoption of AI solutions, and be mindful of the impact that AI will have on their workforce.

By doing so, they can ensure that they are well-positioned to take advantage of the opportunities that AI presents.

The cautionary note in our ongoing tale is that our minds are linear while the technology’s development is exponential. Will 2024 become an ever-frantic game of catching up to the latest AI trends, regulations, and developments? Most likely, it will be chaotic, thrilling, and possibly a little scary…