The Challenges of AI Safety in our industry and society
- David Gutierrez
- Feb 25, 2025
- 4 min read
Here we go again. Humanity stands at a crossroads where technological advancement and human nature converge. We must make decisions in the relative dark that will impact unknown numbers of people into an uncertain future. This is an exciting time for human progress, but these decisions must never be made in a vacuum of our own excitement or fears. So how do we tackle the challenges of AI safety in our industry and society?

Power:
Our power production and distribution grids face significant challenges in meeting the demands of the data centers that house AI platforms.
A data center leader providing infrastructure for AI services told me that he now has no choice but to build his own long-term power generation infrastructure. His long-term plan includes the possibility of micro nuclear reactors to meet the demand he is seeing for AI services.
Aiding the private sector by liberalizing and reconsidering nuclear power generation rules and laws should be a priority. Nuclear power is the most potent, consistent, and cost-effective source of carbon-free power generation available today. And AI is hungry for power.
Chip Production:
Geopolitics: The majority of microprocessors are still produced in Asia, with Taiwan, South Korea, and China the leading nations. With geopolitical tensions currently threatening global markets and their underlying logistics, the US must continue to gain, maintain, and secure domestic production of these vital components. US rivals may use their foreign policies in that region as a means to destabilize the US's access to microprocessors.
National Politics: The CHIPS Act is a great start, but it remains to be seen if it will be sufficient. Private investment in this field may be a better solution than government-backed solutions.
Education and Labor Shortages: Shortages of electrical engineering (EE) and computer science (CS) graduates in the US are making it hard for organizations like TSMC and Intel to find qualified workers to build fabs.
- TSMC delays Arizona facility opening, citing a lack of specialized labor
- Intel Addresses Semiconductor Workforce Shortage
Ethics, Safety, and Fears of AI
While we have an amazing number of brilliant people, from ethicists to engineers, working on and thinking about the ethics and morals around AI, I see a few challenges that must be overcome for General AI to be successful in our industry and in our society.
What ethics do we teach AI? How do we best generate some kind of unifying theory of ethics and ethical behavior for AI itself? Many organizations have been working to answer this question, with different solutions and different results. The question I have for the industry is: would this impose one set of ethics, one ethical schema, as the "default"? How do we account for divergent cultural, religious, and ethnic differences in human experience? Is there a unifying theory of ethics at all?
We fear that what we are building will become a person. While you may think this is a fringe idea, many people are already thinking about how to define consciousness in light of potential General AI. And this, at bottom, scares us.
We are scared of the T-800. We are rightly concerned about how AI systems could be used by states in war, leading to the post-apocalyptic sci-fi world of The Terminator: a world where an AI concludes that peace can only be achieved by removing the hurdles to peace.
We are scared of RoboCop. We are rightly concerned about the use of AI by our civil authorities. With governments increasing their covert and overt surveillance of their citizenry, what would be the impact of an AI-powered surveillance platform? What are the implications for our civil and human liberties? How will AI impact our policing and criminal justice systems?
We are scared to understand ourselves more. How will AI reflect back to us the ways we wage war, commit crime, and pollute? We fear the changes AI will demand of us.
Will we be cyborgs? Individuals like Elon Musk and organizations like Neuralink argue that the best way to coexist with AI and any super-intelligence is to build tools into ourselves so we can keep up.
The great tech replacement theory. There is a widespread fear that as AI progresses, humans will have nothing left to do, and that with that newfound leisure we will find little to no meaning in our lives and end up in mass misery.
New Humans. On the far side of that spectrum, we see ideas of convergence where our next evolutionary process will be technological instead of biological.
Conclusion:
The adoption of AI presents a multitude of challenges, ranging from political, technical, and infrastructural hurdles to ethical and societal concerns. Addressing these challenges requires a multifaceted approach, involving collaboration between governments, industries, and researchers. By proactively tackling these issues, we can harness the potential of AI to drive innovation and progress while mitigating its potential risks and ensuring a future that benefits all of humanity.
As we move forward, it's crucial to remember that the development and deployment of AI should not be a race, but rather a thoughtful and deliberate journey. The choices we make today will shape the world of tomorrow, and it is our responsibility to ensure that AI is used for good, not for ill.
As you think of these and many more issues, let's collaborate in thinking about solutions.