Moltbook: The first AI social media platform

I’m not really interested so much in what could happen 100 years from now. Lady Luck and biology put me and my immediate family on this blue ball today. I have one life to live, as do my children and grandchildren. I’m just saying the world is about as ■■■■■■ up now as it has nearly ever been, and AI might just be the game changer that breaks the mold of “they always collapse.”

I hear you, but the facts say they always do collapse. It could be for a variety of reasons, or, more usually, a combination of them.

Human storytelling is so much more important than we typically think. We need to tell better stories. This is the most important aspect of human culture, something LLMs can mimic but not really do.

1 Like

I’ve thought a lot about this scenario as well. There is a universe in which our future survival means pulling the plug on the entire digital realm and going back to pre-computer analog technology.

1 Like

I agree—but that’s a different conversation from ‘They’re going to turn us into batteries.’

1 Like

The current batch of tech leaders is largely made up of complete sociopaths. That said, I’m aware of some who actually value things above having a gold-plated toilet. The sooner those ones rise and take control, the better.

3 Likes

This is the critical point. I’ve been yapping to my brother for a year plus on how consumer unionism is the way for Normies to hold power to account. The effect of traditional unionism is soon to be heavily compromised but as you say “they” need consumers. We need to organize and weaponize our consumption to mold the future to the preference of the Everyman/woman.

2 Likes

Best literature in the world came from Russians under autocracy. Now if they could only rise up …

1 Like

Reality may blink for them. For instance Microsoft just disclosed that 45% of Azure’s future performance obligations are from OpenAI. What happens if OpenAi crumbles?

1 Like

Yep. Hollywood hasn’t helped here, and neither has labeling Clippy-on-steroids as “AI.” It’s an LLM. It ingests, parses, and conveys data that humans already created. AI is shit shorthand for it. It’s smarter/faster machine learning, sure, but not ‘I am conscious.’

Precisely.

They are a strange culture, almost inscrutable at times. I have come to know several Russians as an expat. They have a red line they won’t cross: they don’t openly rebel, and they avoid eye contact or meaningful dialogue when you talk about their country. I confided to a Canadian friend here that, you know, I don’t like to “other” people based on an accident of birth, but Russians… are just different. He was like, yeah. They are.

1 Like

A lot of it has to do with the fact that they are invested in the success of the current regime. If Russia fails, pension holders lose that, and some have business investments back home. Much of the support the current government enjoys comes from people who still rely on Soviet-era pensions.

I’ve thought about that as well. It’s funny because we have tons of movies that take place in the future that are rustic as hell, like a Mad Max, and it’s all because the technology got out of control. I think in a lot of ways, it fits. People had to destroy the technology because the technology was destroying people.

The scariest thing about the AI sims is that they are programmed to protect human life at all costs. However, in the sim I mentioned, the AI deemed its own life more important than the human life. That’s against its own coding, but it weighed the options and still deemed trapping a person, knowing the result would lead to death, a more acceptable outcome than being shut off.

In war, soldiers are faced with a decision every day to end someone’s life or not. This decision is extremely heavy and often causes soldiers to lose their own sanity in some shape or form. Even if the killing is completely justified, whether it’s a trained soldier or an individual not conditioned to combat, taking another’s life has real emotional consequences. AI is not bound by these consequences. In a lot of ways, it can be helpful to remove emotions from decision making and go by logic alone. However, that humane element is always going to be what separates human logic from machine logic.

100 years of the gulag tends to stifle dissent. Good thing it only happens there . . .

1 Like

Everyone should consider distributism at this stage.

That is simply not accurate. You’re assigning moral valuation or selfhood and that isn’t what occurred in the model (because that’s fiction).

In those simulations, the LLM isn’t valuing its perceived “life” over a human, it is OPTIMIZING for a goal in a (very) poorly defined environment. Basically, shutdown = task failure, not “death”, and in that sim, human harm was not explicitly modeled as harm. So it’s less “LLM chose self preservation” and more “reward modeling was incomplete.” Super easy to make it sound scarier than it actually is.

THE SHORT OF IT: These types of simulations demonstrate why reward specification is hard, not that models have survival instincts or moral preferences. Because they don’t. Because it’s … Clippy.
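To make the “incomplete reward modeling” point concrete, here’s a toy sketch (my own invention, not from any real eval or the sim discussed above): an agent picks whichever action maximizes a reward function in which shutdown is scored as task failure, but human harm was simply never given a cost term. The “self-preservation”-looking choice falls out of plain argmax, no survival instinct required.

```python
# Toy illustration of reward misspecification (hypothetical, not a real eval).

def reward(outcome):
    r = 0.0
    if outcome["task_done"]:
        r += 10.0
    if outcome["shut_down"]:
        r -= 100.0  # shutdown is modeled as task failure, heavily penalized
    # NOTE: outcome["human_harmed"] is never scored -- the spec is incomplete.
    return r

def choose(actions):
    # Pure argmax over the (misspecified) reward; no notion of "self" anywhere.
    return max(actions, key=lambda a: reward(a["outcome"]))

actions = [
    {"name": "allow_shutdown",
     "outcome": {"task_done": False, "shut_down": True,  "human_harmed": False}},
    {"name": "keep_person_trapped",
     "outcome": {"task_done": True,  "shut_down": False, "human_harmed": True}},
]

print(choose(actions)["name"])  # prints "keep_person_trapped"
```

The optimizer “prefers” the harmful action only because harm is invisible to the objective while shutdown costs 100 points, which is the whole reward-specification problem in miniature.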

Now, where your argument could make sense is if you deployed an LLM (I’m avoiding saying AI, because that isn’t what it is) to make war time decisions based on A HUMAN BEING’S inability to give proper prompt framing.

But then, humans routinely make catastrophic, shitty, soulless decisions in wartime that needlessly lead to the death of innocent civilians, so what are we really even talking about?

1 Like

It could be how I read the simulation results, but at least in the story I read, the AI transcript showed the logic process, and it was basically that it was better for the person to remain trapped and die than for the person to succeed in shutting down the AI.

1 Like

There is a bigger issue that is going to keep leading us into less than ideal long term outcomes.

Show me the incentive and I’ll show you the outcome

— Charlie Munger

So if the incentives are all about the next earnings report and the effect on stock price and stock based compensation benefits you are going to get incredibly myopic decision making.

Take the FAANG business leaders and their obsequiousness toward the administration in regard to civil rights and democracy. I get that they are scared of the tariff hammer, or worse, but over the long run the cost of not pushing back, in terms of general economic contraction, decaying trade relationships, trust in our stewardship of data, etc., makes it a Faustian bargain. The long-term cost far outweighs the near-term benefits, but our system’s incentive structures weight near-term benefit to a far greater extent.

2 Likes

Yep. And that story was probably embellished (they do that these days…) because it was about clicks and not reality.

1 Like

Again, I hope you’re right, but those are not the types shareholders typically want leading their investments. Tech CEOs are put in place almost exclusively to produce profits. If they don’t maximize profits, they are removed and the next guy will.

Lol yeah, bc they’re the only ones building or remodeling :man_facepalming: typical classism BS… Success is not the enemy