
When AI Imports American Legal Culture: The New Risk Surface for Constitutional Protection

Series: (1) The Firehose of Plausibility → (2) An Upstream Epistemic Attack → (3) The Heartbeat Layer

In three earlier essays, I described how AI changes the information environment: first through a firehose of plausibility, where convincing narratives can be produced faster than people can verify them; then through an upstream epistemic attack, where the target is no longer individual facts but the warrant layer of knowledge itself; and finally through the heartbeat layer, where the knowledge pipeline becomes executable and starts producing recurring system behavior.

There is a fourth layer.

The rights layer.

When AI does not merely influence what people believe, but also which people get treated as full rights-bearing subjects, the problem stops being purely epistemic. It becomes constitutional.

Sweden is at risk of importing American conflict patterns without the legal infrastructure that, at least in theory, keeps those patterns within the rule of law. We import the categories, the affect, and the punitive reflex: “criminal,” “addict,” “sex offender,” “illegal,” “unsafe person,” “bad actor.” What we leave behind is the legal basis, institutional responsibility, proportionality, remedies, oversight, and a clear endpoint.

The result is an informal forfeiture culture.

Not always through law.

Often through behavior.

People begin to act as if certain groups have weaker rights than others.

Rights are not a popularity contest

It is easy to defend the rights of someone you like.

That is not where the rule of law is tested.

The rule of law is tested when the person is uncomfortable, convicted, addicted, poor, angry, foul-mouthed, socially disruptive, or politically impossible. If rights only apply to the well-behaved, sympathetic, and socially accepted, they are not rights. They are privileges.

Swedish constitutional law starts somewhere else. Chapter 1, Article 9 of the Instrument of Government requires courts, administrative authorities, and others performing public administrative functions to observe equality before the law and act objectively and impartially. Chapter 2, Article 15 protects property, even though that protection is not absolute and can be limited under legally defined conditions.

That does not mean the state can never intervene.

It means intervention requires legal basis, process, proportionality, and a concrete connection.

Crime can have legal consequences. Risk may need to be managed. Permits can be revoked. Criminal proceeds and crime-linked property can be forfeited.

But “I dislike you” is not a legal provision.

“He uses drugs” is not a legal provision.

“She is difficult” is not a legal provision.

“He is a bad person” is not a legal provision.

The American import: the harshness without the system

The clearest example is the area of sexual offenses.

In the United States, sex offender registration and notification exist as a formal system. SORNA, the federal Sex Offender Registration and Notification Act, sets minimum standards for sex offender registration and for public notification. It can be heavily criticized, but it is still a system: law, registry structure, authorities, defined categories, and a legal framework.

Sweden does not have that model.

Instead, Sweden has more limited mechanisms, such as criminal record extracts for certain jobs, assignments, or internships in schools, preschools, and child-related activities.

That is an entirely different legal culture.

A criminal record extract in a defined employment context is not the same thing as a private gossip group. An administrative process is not the same thing as a social punishment system built on screenshots, rumors, and permanent suspicion.

This is where the Swedish cargo-cult version becomes dangerous:

“Megan’s Law seems good, so I’ll start my own.”

No, Karen.

That is cargo-cult law, not rule of law.

The problem is not that risks do not exist. Sexual offenses, especially against children, require serious risk management. The problem is when risk management is replaced by private permanent punishment without process, proportionality, remedy, or endpoint.

Then a parallel penal regime is created.

The sentence has been served, but the group continues to judge.

Collateral consequences become social reflexes

American law has long discussed “collateral consequences”: consequences that follow a conviction but sit outside the sentence itself, such as restrictions on work, housing, licenses, voting rights, or registration. Research on sex offender registration has also described social and economic side effects for registered individuals.

In the Swedish context, the question becomes even more interesting, because we sometimes import not the legally regulated consequence itself, but the social feeling behind it.

It is no longer:

“The law limits you here.”

It becomes:

“You are that kind of person, so it feels reasonable to treat you as less protected.”

That is where second-class citizens begin to emerge.

Not first in the law book.

In behavior.

The convicted person who is never finished serving their sentence.

The addict whose property starts to seem less real.

The migrant who is treated as a logistical problem rather than a rights-bearing subject.

The socially inconvenient person whose privacy becomes optional to respect.

This is how rights decay in practice. Not always through formal abolition, but through repeated informal downgrading.

A person remains a citizen, a resident, a defendant, a claimant, an employee, an author, a property holder, a human being.

But socially, the category begins to speak louder than the status.

Digital vigilantism: private justice with better self-image

Digital vigilantism is already an established research field. It concerns online self-appointed justice: monitoring, exposure, punishment, or social control by private actors trying to fill what they perceive as gaps in the formal legal system.

This is exactly what happens when people say:

“The system is not doing enough, so we will do it ourselves.”

Sometimes it begins understandably. Someone is afraid. Someone wants to protect others. Someone thinks the state has failed.

But the rule of law is not merely a tool for reaching the right person. It is also a protection against what people do when they are certain they already know who the right person is.

Without process, there is no real examination.

Without proportionality, there is no brake.

Without remedy, there is no way out of the category.

Without an endpoint, social punishment becomes a life sentence no court ever imposed.

That is the quiet danger of private justice. It does not need to announce itself as tyranny. It can present itself as care, safety, awareness, community protection, or common sense.

And often that motivation is genuinely fear or protection.

That does not make it lawful.

That does not make it proportionate.

That does not make it safe.

A society governed by law cannot outsource punishment to group chats and call the result justice.

AI makes the reflex scalable

AI amplifies this risk surface because models are built for classification, pattern recognition, risk flagging, and policy compliance. None of this has to be malicious, and that is precisely what makes it dangerous.

A system trained to minimize risk can begin to act as if the category matters more than the legal question.

The question becomes not:

What is the legal basis?

But:

Which risk category does this person belong to?

That is a category error: replacing a legal question with a sorting question.

The difference matters.

A legal question asks: what authority exists, under what conditions, with what procedural safeguards, against what specific object, decision, or person?

A sorting question asks: what kind of person is this?

Those are not the same question.

The first belongs to a state governed by law.

The second belongs to a machine of social control.

Danielle Citron wrote about “technological due process” as early as 2008: automated decision systems can collapse individual assessment and rulemaking without carrying the procedural protections of either. System opacity also makes it difficult to see when code has effectively become policy.

The EU AI Act points in the same direction. Article 5 prohibits, among other things, AI systems used for social scoring and certain systems that assess or predict the risk that a person will commit a criminal offense solely on the basis of profiling or personality traits. An exception exists for systems that support a human assessment based on objective and verifiable facts directly linked to criminal activity.

That is almost the constitutional principle in technical form:

Risk category is not enough.

There must be a concrete connection.
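
To make that principle concrete, here is a minimal illustrative sketch in TypeScript. Every type and field name is invented for this essay; nothing here is drawn from the AI Act, from Swedish law, or from any real system. The point is structural: a risk label can be built from a category alone, while a decision that restricts someone cannot even be written down without a legal basis, verifiable facts tied to the specific case, proportionality, a remedy, and an endpoint.

```typescript
// Illustrative sketch only. All names are invented for this essay; nothing
// here is drawn from the AI Act, from Swedish law, or from any real system.

// The sorting question: "what kind of person is this?"
// It can be answered with a category alone.
interface RiskLabel {
  personId: string;
  category: string; // "high risk", "unsafe person", "bad actor"
}

// The legal question: "what authority exists, under what conditions, with
// what safeguards, against what specific object, decision, or person?"
// It cannot even be written down without the elements the essay keeps listing.
interface LegalDecision {
  legalBasis: string;         // the provision actually being applied
  specificObject: string;     // the property, permit, or measure at issue
  verifiableFacts: string[];  // concrete facts tied to this case, not a profile
  proportionality: string;    // why this measure and not a lesser one
  remedy: string;             // how the decision can be challenged
  endpoint: string;           // when it ends or must be re-examined
  humanDecisionMaker: string; // the system may support, not replace, the assessment
}

// A restriction is only ever justified by the second shape, never the first.
function mayRestrict(input: RiskLabel | LegalDecision): boolean {
  return "legalBasis" in input && input.verifiableFacts.length > 0;
}
```

The types are the argument: the sorting question fits in two fields; the legal question does not.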

Guardrails can also carry legal culture

The problem is not that AI has guardrails.

The problem is when global guardrails become carriers of legal culture without sufficient local translation.

American safety culture is often strongly shaped by categories: sex offender, felon, addict, illegal immigrant, extremist, unsafe person, dangerous actor. In the American context, hard legal and administrative systems often surround these categories. In the Swedish context, the categories risk being imported as social reflexes without the same infrastructure.

AI can then function as a norm-import machine.

Not by saying: “Swedish constitutional law does not apply.”

But by repeating small priorities:

Be careful with this person.

Restrict.

Distance.

Weight safety more heavily.

Avoid helping.

Flag risk.

Each individual decision may feel reasonable. Together, they can begin to produce a practical underclass of people whose rights always come after someone else’s discomfort.

This is not censorship in the classical sense.

It is not a coup.

It is rights drift — constitutional drift at the level of everyday classification.

Millions of small decisions where the unpopular person is gradually moved from rights-bearing subject to risk object.

Deportation is not deletion

The same logic exists in migration policy.

In an earlier essay, I wrote that deportation does not delete a person. It relocates them — and often relocates system knowledge, grievance, social ties, risks, and future consequences with them.

That is the same mistake in another domain.

A category is placed on someone, the problem is moved, and distance is treated as resolution.

But people are not deleted by categories.

Addicts are not deleted by addiction.

Convicted people are not deleted by conviction.

Migrants are not deleted by deportation.

People flagged as risks are not deleted by AI flags.

If the state begins to act as if the label makes the person less real, then the label has become more dangerous than the person.

The Swedish answer must be boring

The Swedish answer should not be romantic.

It should be boring, legal, and more stubborn than the moral panic:

Show the legal basis.

Show the process.

Show the proportionality.

Show the concrete connection.

Show the remedy.

Show the endpoint.

If the state is going to interfere with property: show the legal basis.

If a permit is going to be revoked: show the legal basis.

If information is going to be spread: show the legal basis.

If someone is going to be restricted because of risk: show the concrete and verifiable connection.

If AI is going to classify people: show how the classification can be reviewed, challenged, and corrected (see the sketch below).

Otherwise, it is not rule of law.

It is forfeiture culture with better self-image.
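
The last item on that list can be made concrete too. Here is another minimal sketch, again in TypeScript and again with invented names, of what a reviewable classification record could look like: every label carries the facts it rests on, names an accountable decision-maker, can be challenged and corrected, and expires instead of living forever.

```typescript
// Illustrative sketch only; all names are invented for this essay.
interface ClassificationRecord {
  subjectId: string;
  label: string;
  basis: string[];         // the concrete, verifiable facts the label rests on
  decidedBy: string;       // an accountable decision-maker, not "the model"
  decidedAt: Date;
  expiresAt: Date;         // the endpoint, decided up front
  challenges: { filedAt: Date; outcome?: "upheld" | "corrected" }[];
}

// A label remains actionable only while it has a basis, has not been
// corrected on challenge, and has not passed its endpoint.
function isActionable(record: ClassificationRecord, now: Date): boolean {
  const corrected = record.challenges.some(c => c.outcome === "corrected");
  return record.basis.length > 0 && !corrected &&
    now.getTime() < record.expiresAt.getTime();
}
```

The technical details do not matter. The demand does: no basis, no label; no endpoint, no label; corrected on challenge, no label.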

The real test of constitutional protection

Constitutional protection does not exist primarily for the popular.

It exists for the moment when the majority, the authority, the group chat, the platform, or the AI system decides that someone deserves fewer rights.

That is when the rule of law must answer:

No.

Everyone is equal before the law.

Even the people you dislike.

Especially the people you dislike.

Rights that only apply to sympathetic people are not rights. They are social bonuses.

Sweden therefore needs to be extremely careful about importing American conflict patterns without American legal infrastructure. And AI systems operating in Sweden must understand more than language. They must understand legal culture.

They must understand that the Swedish rule of law does not begin with the question:

Which category does this person belong to?

It begins with:

What is the legal basis?

That is the new risk surface for constitutional protection.

Not merely that AI makes us believe false things.

But that AI, platforms, and imported social reflexes can help us treat “the wrong kind of people” wrongly — quickly, plausibly, and in the language of safety.