Issue #1 | March 15th, 2025


Today on Technonomicon:

  • We look at the current regulatory landscape of AI
  • Military drone swarms get an upgrade
  • And a startup creating modular "thinking" for AI

First time reading? Subscribe here!


Rules for Thee but not for Me (thinks OpenAI)

OpenAI's Chief Global Affairs Officer (head of lobbying, in other words), Chris Lehane, met with Trump administration officials last week for a series of meetings on AI policy and strategy. The White House put out a Request for Information (RFI) after Trump signed an executive order revoking much of what the Biden administration had put in place regarding AI.

Lehane said the Trump administration is focused on strategy around U.S. economic competitiveness and national security and that "Our work stream is intersecting with where the administration is going."

This comes as AI discourse shifts away from safety concerns and toward "winning" the not-so-invisible AI race with China.

Lehane shared a memo (wishlist) with the White House on where OpenAI would like to see AI policy and strategy headed. It focuses on a few key topics, shared below, taken from an Axios article (which saw the memo early):

  • Pre-emption of state AI laws
  • Balanced rules around what advanced AI technology can be exported abroad
  • Allowing AI to learn from copyrighted material
  • Infrastructure investments for AI growth
  • Government adoption of AI

Almost two years ago, OpenAI's CEO, Sam Altman, went in front of Congress, practically begging for the industry to be regulated. And at the time, there was bipartisan support for a new regulatory body to step in and regulate the industry.

Looking back, two years ago was definitely too early to start regulating, let alone to create a whole regulatory body. New industries, especially new technologies, move quickly while the market sorts itself out and companies learn what customers actually want. Regulation slows down innovation, so there's an obvious balance the government needs to strike, particularly early on, when innovation and change happen most frequently.

This raises the question: Why would Sam Altman ask the government to regulate his own industry when he knew it would be bad for business?

There could be a few reasons:

  • They know it will be regulated anyway, and they want to get ahead of the narrative
  • They want to get on the government's good side and will share knowledge and give an insider's perspective on the industry
  • They want to make it harder for competitors to enter the market by creating regulatory and legal barriers early
  • They are actually okay with a little regulation, but they overstated certain risks to point regulators in the wrong direction
  • It gives them plausible deniability against criticism that they don't actually want regulation
  • They actually drink the kool-aid they're making and truly think the government needs to step in and "help" them/the industry (save them from themselves?)

Really, it could be any of these, and most likely it is all of them. Sometimes, after drinking the kool-aid for so long, you start to believe it. The pattern I described is regulatory capture, which is a form of corruption that the current environment in the Trump White House is ripe for.

Smaller governments, revolving doors, increased corporate lobbying, uninformed government officials, and general chaos in the federal government create the perfect environment for the leading industry incumbent of an emerging technology to come in and solidify its spot, influencing and pointing regulators in the direction it deems important.

They've shifted the narrative from AI safety to national security and our technological arms race with China to better align themselves with Trump's priorities. Not to mention the promise to invest $500 billion in the U.S. over the next four years, à la the Stargate Project, for which the money has yet to be secured and contracts have yet to be signed (even though the Oracle-operated data centers are well into construction).

How did Sam know Trump likes big things?

Now comes OpenAI's wishlist for the Trump White House. Most notably, they want exemptions and protections from state regulations, copyright law, and liability. Their biggest argument, made by Lehane, is that China has "unfettered access to data" while American companies have to either license data or ensure their use of it falls under fair use.

While this is a big setback when comparing AI companies in the U.S. vs. China, ultimately we are comparing different economies, different rules, and different forms of government.

On the world stage, however, this doesn't matter, which brings us back to the balance mentioned earlier, and it's the strongest argument for some of the exemptions. It just depends on whether you think generative AI will change the world or not.

The fair-use argument is hard to make, as fair use is largely subjective, and it would mean a lot more if OpenAI weren't punching down: a $100-billion-plus company should pay for what it uses (steals?), while smaller creators usually see none of the benefit.

Pay to license content like everyone else? Maybe we should rethink the current copyright and DRM laws for everyone, and not just a "special" few.

And why should the generative AI industry get an exemption from state laws? I thought the Trump administration wanted to push everything to the states? Apparently, states' rights matter until your favorite tech billionaire needs a favor.

>Elon Musk has entered the chat

To me, this reeks of regulatory capture. The crypto industry is doing the same thing and has spent over $100 million to get pro-crypto representatives elected to sway regulation in their favor, and it's working.

If whoever has the most power or the most money gets to decide the fate of the very regulations that will be put upon them, what is even the point of a democracy?

Rules for thee but not for me, thinks OpenAI.


In Other News

  • After years of battling the Consumer Financial Protection Bureau, traditional banks now fear that dismantling the CFPB under Trump would ironically benefit tech giants like PayPal and Venmo, whose payment platforms could operate with fewer regulations, making it easier for them to compete with banks. (CNBC)
  • An insane look at what the future of video games could look like backed by AI and Next Gen Virtual Reality. (Pirate Wires)
  • Niantic, the creator of Pokémon Go, is selling its games division to Saudi Arabia-owned Scopely. This comes after Niantic announced it had used Pokémon Go player data to train a geospatial AI model, which it spun out into a separate company to work on further. (Reuters, 404Media)
  • Drone swarms are becoming a reality, with a Kyiv-based startup creating unified controls that make commanding a swarm feel like playing a video game. (IEEE Spectrum)

Startup Highlight

Today I am highlighting Maisa AI, the startup behind the KPU, or Knowledge Processing Unit, a modular way to achieve AI "thinking." Think DeepSeek R1 or OpenAI's o1, but applicable to any model. Benefits include increased context, fewer hallucinations, and more modularity in AI systems. I made a video about them on my YouTube channel, Brady's Brain, that you should check out if you want to learn more. Otherwise, give the KPU a try and check out Maisa AI.


👍 Enjoy this newsletter?

✉️ Forward to a friend and let them know they can subscribe here

👁‍🗨 Share Technonomicon in your favorite communities or on social media

📩 Feel free to reply to this email with feedback, new ideas, interesting websites, or just to say hi


I am soft-launching fuckdrm.com. Take a look and share some feedback!

See you next week!