Today on Technonomicon:
- Digital commons in the AI Age
- How to get a loan on your next Taco Bell order
- A look at the "Metroburb"
First time reading? Subscribe here!
AI Is Creating a Tragedy of the Internet
About a year ago in Technonomicon, I wrote about Digital Commons. Digital Commons are digital spaces that no one really owns, maintained and created by the community that uses them (or sometimes by a commercial entity). Think Wikipedia, open-source software, crowdsourced data, or the internet itself! Wikipedia has a lot of information on Digital Commons if you want to read further.
When writing about this previously, I briefly talked about Digital Commons in the age of AI. I focused on trust, stating that because AI can generate large amounts of content (or consume a large amount of content) very easily, we need to be on the lookout for "AI slop"—low-quality, AI-generated content being dumped into Digital Common spaces like Wikipedia. This would break users' trust:
As we approach the broad adoption of AI tools, it is hard to understand whether what you are reading or downloading has been generated by AI. Overall, this may be fine. Still, AI makes creating large swathes of text, code, and images very easy, saturating original content and making "organic data" (idk if this exists, but I'm using it as non-AI-generated data) harder to find. Again, this may not be an issue to some, but there is some unwritten form of trust a user puts in the "owner" of the content they are consuming. If what they are consuming is led to be X, but it's actually Y, that is a problem, and the trust will be broken. AI makes it easy for people to do this. Digital commons can help mitigate this and cultivate digital gardens of organic data that are vetted and curated to provide value to the community it is serving by design. - Technonomicon March 2024
Later in that piece, I explored the ethical dilemma of large corporations using open source software or crowdsourced data (digital commons) without any contribution back, whether that be time (contributing the company's developers to the open source software) or resources (contributing money or their infrastructure, like compute).
I had the opinion that when large corporations take advantage of Digital Commons by using it without contributing back, it creates a tragedy of the commons.
A tragedy of the commons is a concept that says if too many people enjoy unrestricted access to a resource, they will overuse it and destroy the commons altogether. A very tangible example is climate change and resource scarcity on Earth. It really comes down to self-interest (in the economic sense) and whether more people (or businesses/corporations/organizations) act in an altruistic way or a selfish way.
Expanding on the tragedy of the commons and looking at today's AI landscape, we can see that the problem is worsening in an even more direct way.
Yes, AI-generated content is already pretty much everywhere, but now it's actually the AI companies themselves who are ruining things for everyone else.
Digital commons spaces (see: the entire internet) are being hammered by AI bots from the largest AI companies (Big AI?) so hard that they are essentially DDoSing websites, taking down critical infrastructure that digital commons rely on.
This wouldn't be such a big deal if it didn't directly cost the other side real money and resources.
Fellow Ghost creator LibreNews created a great collection of first-hand accounts of many open source system administrators dealing with AI bots. It is actually crazy how much internet traffic is now just AI bots crawling websites, ingesting as much data as they can get their digital fingers on.
Scraping the internet isn't new. There are even existing rules that the Internet Engineering Task Force (IETF), a standards organization for the internet, came up with to allow website operators to indicate whether they want their website to be scraped and indexed on a search engine. It's called a robots.txt file. You can even see Technonomicon's here!
As some websites do not want to be scraped or indexed, they can block certain scrapers, like Google's or Microsoft's Bing, via the robots.txt file. Scrapers are supposed to check that file before scraping a website and adhere to its rules, including directives that explicitly forbid scraping. There is no technical enforcement of those directions, though; the system relies on voluntary compliance.
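To make that concrete, here is a minimal sketch of the check a well-behaved crawler is supposed to perform before fetching a page, using Python's standard-library `urllib.robotparser`. The robots.txt content and the "HypotheticalAIBot" user-agent are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: search engines are welcome,
# but one AI crawler is asked to stay out entirely.
robots_txt = """\
User-agent: Googlebot
Disallow:

User-agent: HypotheticalAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A compliant crawler runs this check before every fetch.
print(parser.can_fetch("Googlebot", "https://example.com/article"))          # True
print(parser.can_fetch("HypotheticalAIBot", "https://example.com/article"))  # False
```

Nothing stops a crawler from skipping this check, which is exactly the problem: the whole mechanism is an honor system.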
The problem? AI companies are not respecting them.
What a tragedy.
Just as we are experiencing a transition in the U.S. from neoliberalism to mercantilism, with political figures breaking the norms of government and the current world order, we may be experiencing a similar regime change on the internet. Unfortunately, when something like this happens, a lot of change is involved—both good and bad.
The general user just cares about the easiest and most convenient way to do something or get something on the internet. Previously, search engines were the way. Ask your question or put in your query, and the search engine will serve the most relevant websites as a result, filtering their database of billions of websites, books, and other media.
Generative AI has made information accessible in the most convenient way. Simply talk how you would to a human, and it can get you the information and return it in a way personalized to you and your preferences.
But these AI companies are not following the usual norms and rules of the internet, breaking existing protocols and throwing digital commons into chaos, making everyone fend for themselves.
To prevent AI from creating a tragedy of the entire internet, the AI companies will need to compromise to keep things from getting worse (worse meaning the free and open services no longer exist, like a government that is no longer for the people).
Now, if you prefer not to rely on Big Tech's goodwill and want to take the libertarian route to fortifying your own digital frontier, there are some options.
Website operators have found other ways of dealing with AI bots. One tool, Anubis, requires proof of work before letting you (or the bot) through, though it sometimes comes at the cost of real users being mistaken for AI bots.
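Anubis's actual challenge runs in the browser, but the core idea, make every visitor burn a little CPU before getting a page, can be sketched in a few lines. This toy version (with a made-up difficulty setting) illustrates the concept; it is not Anubis's code:

```python
import hashlib
import secrets

def solve_challenge(challenge: str, difficulty: int = 3) -> int:
    """Find a nonce so sha256(challenge + nonce) starts with `difficulty`
    hex zeros. Cheap for one human visitor, expensive at crawler scale."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = 3) -> bool:
    """Server-side check: a single hash, so verification stays cheap."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

challenge = secrets.token_hex(8)    # server issues a random challenge
nonce = solve_challenge(challenge)  # client spends CPU to solve it
print(verify(challenge, nonce))     # True
```

The asymmetry is the point: solving costs thousands of hash attempts on average, while verifying costs one, so a crawler hitting millions of pages pays millions of times the price a single reader does.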
Cloudflare, who are experts at stopping bots (and DDoS attacks), created a tool called AI Labyrinth. It uses AI to fight AI by generating content to slow down, confuse, and waste the resources of AI bots and crawlers who do not respect "no crawl" directives on websites (like the robots.txt mentioned above).
It is unfortunate that it has come to this—a digital arms race where website owners must deploy increasingly sophisticated defenses against the very companies that claim to be making the internet more accessible. All because someone needs to make sure their AI girlfriend has the most cutting-edge data.
In Other News
- Wired is dropping paywalls for FOIA-based reporting, as it is more important than ever that people have access to transparent reporting about the government (Freedom of the Press Foundation)
- The beloved smartwatch Pebble is back, but it is harder and harder to build third-party hardware in the Apple ecosystem, says Eric Migicovsky, the creator of Pebble. (Eric Migicovsky)
- Klarna partners with DoorDash to add buy now, pay later to food delivery. Nothing like getting a loan for your $62 Taco Bell order. The best thing to come of this news, by far, is the memes. (CNBC)
- California Attorney General issues a consumer alert to those who used 23andMe. The company is having financial troubles, which means when it goes out of business, your DNA will literally be up for auction to the highest bidder. There are too many reasons to list as to why this is not good, but fortunately for California residents, you can request your DNA data be deleted. (California Attorney General)
- China is testing experimental 'dogfighting' satellites in space. China is investing heavily in space. Roughly 53,000 of the estimated 70,000 low-earth orbit satellite launches over the next half decade are likely to be from China. (Business Insider)
Company Highlight
Bell Works is a company that created the "metroburb." Essentially a "downtown in a box." They buy old corporate campuses and buildings and turn them into mixed-use utopias within suburbia. Their first location, in Holmdel, New Jersey, is the old Bell Labs complex, which may look familiar if you watch Severance.
This building was the filming location for the Lumon Headquarters in Severance. A perfect building for an evil megacorp once you remove all the gentrification and hipsters.

👍 Enjoy this newsletter?
✉️ Forward to a friend and let them know they can subscribe here
👁🗨 Share Technonomicon in your favorite communities or on social media
📩 Feel free to reply to this email with feedback, new ideas, interesting websites, or just to say hi

See you next week!
Discussion