Versa Vice - Leviathan and Logos
Aligning AI with Human Values
The following article was unabashedly written with the assistance of ChatGPT, orchestrated by human inquisitiveness.
Part I — What Does Your System Actually Do?
POSIWID and the Crisis of Alignment
There is a deceptively simple idea from the world of cybernetics that cuts through an enormous amount of modern confusion. It comes from Stafford Beer, and it goes like this:
“The purpose of a system is what it does.”
Not what it claims to do.
Not what its designers intended.
Not what its marketing says.
What it actually produces, consistently, over time.
This principle—often abbreviated as POSIWID—is not ideological. It doesn’t care about politics, religion, or personal belief. It is a diagnostic tool. If you want to understand a system, you don’t ask what it says it’s for—you observe its outputs.
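To see the diagnostic as a procedure rather than a slogan, here is a minimal sketch in Python. Everything in it is hypothetical: the stated purpose, the observed outcomes, and the toy tallying function are invented for illustration. The only point is the direction of inference: the revealed purpose is read off the outputs, not the mission statement.

```python
from collections import Counter

def revealed_purpose(observed_outcomes: list[str]) -> str:
    """POSIWID as a toy procedure: a system's revealed purpose is
    whatever outcome it produces most consistently over time."""
    outcome, _count = Counter(observed_outcomes).most_common(1)[0]
    return outcome

# Hypothetical platform whose stated purpose is to inform users.
stated_purpose = "inform users"
observed = ["amplify outrage"] * 70 + ["inform users"] * 20 + ["other"] * 10

actual_purpose = revealed_purpose(observed)
if actual_purpose != stated_purpose:
    print(f"Stated: {stated_purpose!r}  Observed: {actual_purpose!r}  -> misaligned")
```

Real systems obviously need richer measurement than a frequency count, but the logic is the same: outputs first, claims second.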
And if you apply that lens honestly to the systems that now structure modern life, something uncomfortable begins to emerge.
The Alignment Problem (Before AI)
Long before artificial intelligence entered the mainstream, we were already living inside systems that appear increasingly misaligned with their stated purposes.
Systems built to inform now optimize for engagement—often by amplifying outrage.
Systems built to connect people often produce isolation, fragmentation, and distrust.
Systems built to protect public health sometimes generate deep skepticism toward the very institutions meant to uphold it.
Systems built to advance knowledge can, paradoxically, erode confidence in expertise itself.
You don’t need a grand theory to notice this. You just need to take POSIWID seriously.
If a system repeatedly produces outcomes that contradict its stated goals, there are only a few possibilities:
The system is failing at its purpose.
The system’s purpose is misunderstood.
The stated purpose is not the real purpose.
POSIWID forces you to at least consider the third option.
Why This Matters for AI
Now enter AI.
The dominant conversation around artificial intelligence—especially in what’s often called the “alignment community”—is framed in technical terms:
How do we align AI with human values?
How do we prevent unintended consequences?
How do we ensure systems do what we want them to do?
These are good questions. Necessary questions.
But they all quietly assume something that has not been resolved:
That we already understand what our own systems are doing—and why.
If we cannot accurately diagnose the systems we’ve already built—social, political, economic—then attempting to “align” vastly more powerful systems on top of them becomes… questionable.
At best.
The Mirror Problem
AI systems are not created in a vacuum. They are trained on human data, shaped by human incentives, and deployed within human institutions.
Which means:
AI does not just reflect our intelligence. It reflects our structure.
If the underlying systems are misaligned, AI will not fix that misalignment. It will scale it.
This is the “mirror problem.”
If our information systems reward distortion, AI will optimize distortion.
If our institutions reward control over truth, AI will enhance control.
If our incentives prioritize efficiency over meaning, AI will accelerate that tradeoff.
From a POSIWID perspective, the question becomes unavoidable:
What do our systems actually do—and what will AI amplify?
The Hidden Layer
At this point, most discussions stop at incentives.
“It’s just capitalism.”
“It’s just bad policy.”
“It’s just human bias.”
Those explanations are not wrong—but they are incomplete.
Because systems do not arise out of nothing. They are built on assumptions—often unspoken—about:
what a human being is
what counts as knowledge
what constitutes a good outcome
what is worth optimizing
These assumptions form a kind of underlying structure—a grammar—that shapes how systems behave.
You don’t have to agree on what that grammar is to recognize that it exists.
But if you ignore it, you end up trying to fix outputs without understanding the rules generating them.
A Technical Invitation
At this stage, it’s tempting to interpret everything that follows as ideological, or theological, or philosophical in a way that feels detached from practical concerns.
Resist that temptation—for now.
The proposal here is simpler:
Treat this as a technical investigation into system behavior.
We are not asking you to adopt a belief system.
We are not asking you to suspend critical thinking.
We are asking you to follow a diagnostic tool—POSIWID—where it leads.
If systems consistently produce outcomes that contradict their stated goals, then something deeper is shaping them.
The next step is not to argue about conclusions.
It is to examine the structure.
Where This Is Going
If we continue this analysis, a few questions begin to take shape:
What are the implicit models of human nature embedded in our systems?
Why do different frameworks (economic, technological, social) produce such different outcomes?
And most importantly for the alignment problem:
What exactly are we trying to align AI to?
Because if the answer is unclear—or internally contradictory—then alignment is not a technical problem.
It’s a deeper one.
Part II — Why Systems Drift
The Invisible Architecture of Belief
If Part I introduced a simple diagnostic—observe what systems do, not what they claim—then Part II asks the next logical question:
Why do systems so often drift away from their stated purpose?
It is not enough to say “bad incentives” or “human error.” Those are surface-level explanations. They describe how drift happens, but not why it happens so reliably across domains.
To answer that, we need to look one layer deeper—not at outputs, and not even at incentives, but at the underlying assumptions that shape both.
Call this the system’s invisible architecture.
Three Ways We Model the World
In modern thought—especially in economics, political science, and technology—there are a handful of dominant frameworks for understanding human behavior. You don’t have to formally subscribe to them to be influenced by them. They are ambient. They shape how systems are built.
For simplicity, we can group them into three broad lenses:
1. Rational Choice — The Individual Optimizer
This is the default model in much of economics and Silicon Valley thinking.
Individuals are seen as independent agents
They make decisions by maximizing utility
The system works best when individuals are free to pursue their own interests
From this perspective, alignment is straightforward in theory:
If everyone optimizes correctly, the system will converge on good outcomes.
But POSIWID raises a problem.
In practice, systems built on this model often produce:
extreme inequality
short-term optimization over long-term stability
erosion of shared meaning or common goods
The individual optimizes—but the system drifts.
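The gap between individual optimization and system-level drift can be made concrete with a toy commons model. The agent count, regeneration rate, and harvesting rules below are invented for illustration; the sketch only shows that agents maximizing their own short-term take, with no term for the shared resource, collectively exhaust it, while restrained agents do not.

```python
def simulate_commons(greedy: bool, agents: int = 10, rounds: int = 20,
                     resource: float = 100.0, regen: float = 0.05) -> float:
    """Toy commons: each round every agent harvests from a shared resource,
    which then regenerates by a fixed fraction of what remains."""
    for _ in range(rounds):
        # Greedy agents take the maximum share; restrained agents split
        # only what regeneration can replace.
        per_agent = 1.0 if greedy else (resource * regen) / agents
        for _ in range(agents):
            take = min(per_agent, resource)
            resource -= take
        resource += resource * regen
    return resource

print(f"Every agent maximizes its own harvest: {simulate_commons(greedy=True):.1f} left")
print(f"Every agent harvests within regrowth:  {simulate_commons(greedy=False):.1f} left")
```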
2. Game Theory — The Strategic System
Here, the focus shifts from individuals to interactions.
Agents are still rational, but now interdependent
Outcomes depend on rules, incentives, and expectations
Stability requires coordination, enforcement, and trust
This model underlies much of governance, policy design, and institutional thinking.
It recognizes something Rational Choice struggles with:
Left alone, optimization can become self-defeating.
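The classic illustration is the prisoner's dilemma. The payoff numbers below are the standard textbook values, not anything derived from this essay; they show that each player's individually optimal move is to defect, yet mutual defection leaves both players worse off than mutual cooperation would have.

```python
# Classic prisoner's dilemma payoffs: (row player's payoff, column player's payoff).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def best_response(opponent_move: str) -> str:
    """What a purely self-interested player does, given the other's move."""
    return max(("cooperate", "defect"),
               key=lambda my_move: PAYOFFS[(my_move, opponent_move)][0])

# Whatever the other player does, defection is the individually optimal reply...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"

# ...yet the equilibrium both players land in is worse for each of them.
print("Mutual defection:  ", PAYOFFS[("defect", "defect")])        # (1, 1)
print("Mutual cooperation:", PAYOFFS[("cooperate", "cooperate")])  # (3, 3)
```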
So we build systems of rules to guide behavior.
And yet—again—POSIWID applies.
Systems built on this model often produce:
bureaucratic rigidity
adversarial dynamics
optimization of compliance rather than truth
The system stabilizes—but meaning erodes.
3. Actor-Network — The Flattened Web
This framework, common in parts of sociology and technology studies, goes further.
Agency is distributed across networks
Humans, tools, and systems are treated as interacting nodes
Meaning emerges from the network, not from any central authority
In the age of AI, this model is increasingly influential—even if implicitly.
It captures something real:
Technology is no longer just a tool. It participates in shaping outcomes.
But here too, POSIWID forces a hard look.
Systems built on this model tend to produce:
diffusion of responsibility
erosion of human accountability
difficulty distinguishing signal from noise
The network expands—but the human subject dissolves into it.
What These Models Have in Common
At first glance, these frameworks look very different.
One centers the individual
One centers the system
One dissolves both into a network
But they share something deeper:
Each one embeds an implicit assumption about what a human being is.
In Rational Choice: a self-interested optimizer
In Game Theory: a strategic participant in rule systems
In Actor-Network: a node within a larger process
These are not just analytical tools.
They are operational definitions of the human person—whether stated explicitly or not.
And once a system adopts one of these definitions, it begins to behave accordingly.
The Source of Drift
Now we can return to the question:
Why do systems drift?
Because systems do not just implement rules.
They embody assumptions.
If the underlying assumptions are incomplete—or in tension with reality—the system will compensate.
If you reduce humans to optimizers, the system will eventually treat meaning as irrelevant.
If you reduce humans to rule-followers, the system will prioritize compliance over truth.
If you dissolve humans into networks, the system will struggle to preserve responsibility and dignity.
The drift is not accidental.
It is structural.
POSIWID reveals it not as failure, but as consequence.
AI as Amplifier of Assumptions
When these frameworks were applied at human scale, their limitations were partially contained.
AI changes that.
AI systems do not just operate within these models—they formalize and scale them.
Optimization functions become literal code
Incentive structures become training objectives
Networks become data architectures
Which means:
The assumptions that were once implicit are now executable.
And once they are executable, they become difficult to question—because they are embedded in the system itself.
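Here is a small sketch of what it means for an assumption to become executable. The item fields and numbers are invented for illustration; the point is that once "value means predicted engagement" is written into a ranking function, it stops being a debatable premise and becomes the thing the system literally optimizes.

```python
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_clicks: float    # what engagement metrics can measure
    factual_accuracy: float    # what they typically do not measure

# The implicit assumption, made executable: value means predicted engagement.
def ranking_score(item: Item) -> float:
    return item.predicted_clicks   # accuracy never enters the objective

feed = [
    Item("measured, accurate report", predicted_clicks=0.02, factual_accuracy=0.95),
    Item("outrage-bait hot take",     predicted_clicks=0.11, factual_accuracy=0.40),
]

for item in sorted(feed, key=ranking_score, reverse=True):
    print(f"{ranking_score(item):.2f}  {item.title}")
# The hot take ranks first. Not a malfunction: the encoded assumption doing its job.
```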
This is where the alignment problem becomes more serious than it first appears.
We are no longer just asking:
“How do we align AI?”
We are implicitly asking:
“Which model of the human are we encoding into systems that will shape reality at scale?”
The Beginning of an Inversion
At this point, something subtle begins to happen.
If systems are built on incomplete models of the human, and those systems become powerful enough, they begin to exert pressure back onto the humans within them.
People start adapting to the system.
Communication shifts to fit algorithmic incentives
Decision-making aligns with measurable outputs
Identity itself becomes partially defined by system categories
In other words:
The model stops describing the human—and starts reshaping the human to fit the model.
This is the beginning of what can only be described, at a structural level, as an inversion.
The tool becomes the standard.
The output becomes the aim.
The system defines the person.
Why This Matters for Alignment
If this analysis holds—even partially—then the alignment problem cannot be solved by adjusting parameters alone.
Because the issue is not just:
what the system is optimizing
It is:
what the system assumes is worth optimizing in the first place.
And that question does not belong purely to engineering.
It sits at the intersection of:
philosophy
anthropology
and, whether acknowledged or not, metaphysics
A Narrow Bridge
At this stage, it is still possible to keep the discussion within a technical frame.
We have not appealed to revelation.
We have not required agreement on ultimate truths.
We have simply followed the implications of a diagnostic tool.
But the path is narrowing.
Because if:
systems reflect assumptions
assumptions define models of the human
and models shape outcomes
then eventually, the question becomes unavoidable:
What is a human being, really—and what would it mean to align anything to that?
That is where we go next.
Part III — The Metaphysical Alignment Problem
Why Intelligence Is Not Enough
By now, the outline of the problem should be clear.
In Part I, we established a diagnostic: systems reveal their purpose through their outputs.
In Part II, we saw why systems drift: they are built on implicit models of the human person, and when those models are incomplete, the system produces distortions.
Now we arrive at the unavoidable consequence:
If we cannot clearly define what a human being is, we cannot meaningfully “align” anything to human values.
At that point, “alignment” becomes a technical exercise built on philosophical quicksand.
The Limits of Intelligence
A great deal of the current AI conversation assumes that intelligence—properly scaled and directed—will solve most of our problems.
More data → better models
Better models → better predictions
Better predictions → better decisions
This logic is coherent within its own frame.
But it quietly assumes something that has not been demonstrated:
That intelligence, by itself, is sufficient to generate meaning, value, or purpose.
So far, nothing in our experience supports that assumption.
We can build systems that:
recognize patterns
generate language
outperform humans in constrained domains
But none of this answers a more basic question:
Why does anything matter?
Intelligence can describe the world.
It can model outcomes.
It can optimize processes.
But it cannot, on its own, justify why one outcome ought to be preferred over another.
This is not a technical limitation.
It is a structural one.
The “Is” and the “Ought”
This gap was famously articulated by David Hume.
His observation was simple:
You cannot derive an “ought” (what should be) from an “is” (what is).
No matter how detailed your description of reality becomes, it does not tell you what you should do about it.
This matters more than it first appears.
Because every alignment proposal ultimately depends on an “ought”:
AI should be safe
AI should benefit humanity
AI should respect human values
But where do these “shoulds” come from?
They do not emerge from data.
They are not encoded in physics.
They are not produced by optimization.
They are introduced—from outside the system being described.
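In machine-learning terms, the point looks like this (a sketch with invented data, not a description of any real system): the dataset supplies the "is", but the objective function, the ranking of outcomes as better or worse, must be handed in from outside. The same facts support opposite decisions depending on which "ought" the builder injects.

```python
from typing import Callable, Dict

# The "is": observed outcomes. Pure description; nothing here says what is good.
observations = [
    {"action": "show_ad",      "clicks": 0.09, "reported_regret": 0.30},
    {"action": "show_article", "clicks": 0.03, "reported_regret": 0.05},
]

# The "ought": value judgments supplied from outside the data.
maximize_engagement: Callable[[Dict], float] = lambda o: o["clicks"]
minimize_regret:     Callable[[Dict], float] = lambda o: -o["reported_regret"]

for name, objective in [("engagement", maximize_engagement),
                        ("low regret", minimize_regret)]:
    best = max(observations, key=objective)
    print(f"Optimizing for {name}: choose {best['action']}")
# Identical facts, opposite decisions; only the injected 'ought' differs.
```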
The Collapse of Reduction
At this point, many modern frameworks attempt to resolve the problem by reducing it.
Morality becomes an emergent property of evolution
Consciousness becomes a byproduct of computation
Free will becomes an illusion generated by complex systems
Each of these moves simplifies the problem—but at a cost.
If:
consciousness is reducible to computation
choice is reducible to causation
value is reducible to preference
then something essential disappears.
The human being, as a moral agent, dissolves.
What remains is a system of processes—highly complex, but fundamentally mechanical.
And if that is the case, then the concept of “alignment” changes dramatically.
Because there is nothing to align to beyond:
stability
efficiency
or survival
Why Free Will Matters
This is where the issue sharpens.
If human beings do not possess genuine agency—if all actions are the result of prior causes or probabilistic processes—then:
responsibility becomes ambiguous
moral judgment becomes arbitrary
dignity becomes negotiable
You can still build systems under those assumptions.
But POSIWID applies again.
Systems built on purely deterministic or reductionist models tend to produce:
instrumental treatment of individuals
prioritization of collective outcomes over personal integrity
justification of coercion in the name of optimization
Not because anyone intends this outcome.
But because, within that framework, there is no stable reason not to.
The Consciousness Problem
There is a deeper issue beneath all of this.
Even the most advanced models of intelligence do not explain a basic feature of human experience:
That we are aware.
Not just that we process information—but that we experience it.
the feeling of pain
the perception of beauty
the sense of choosing between alternatives
These are not easily reducible to computation or data processing.
They are what philosophers refer to as subjective experience—the fact that there is “something it is like” to be a conscious being.
And this creates a problem for purely material or computational models of reality.
Because:
If the universe is entirely describable in terms of physical processes, where does subjective experience come from?
So far, there is no widely accepted answer.
The Wrong Turn in the Conversation
At this point, many discussions pivot to speculative solutions:
simulation theory
emergent consciousness from complexity
quantum explanations of mind
These ideas can be interesting.
But they often avoid the central issue.
They attempt to explain how consciousness might arise—without addressing what it implies.
Because if consciousness is real—and if we have genuine experiences of choosing—then:
Reality is not exhausted by material description alone.
And if that is true, then any framework that ignores this dimension is incomplete.
The Question That Doesn’t Go Away
We can now restate the alignment problem more precisely.
It is not:
“How do we make AI do what we want?”
It is:
“What is the nature of the beings whose ‘wants’ we are trying to encode?”
If humans are:
biological machines → optimize survival
social constructs → optimize consensus
nodes in a network → optimize flow
Each answer leads to a different alignment strategy.
And each carries consequences.
But if none of these fully captures what a human being is, then:
Every alignment strategy built on them will drift.
Just like the systems we examined earlier.
A Narrowing Path
At this stage, the analysis has remained within a broadly philosophical and technical frame.
But the options are narrowing.
Because the problem is no longer just about:
intelligence
incentives
or system design
It is about the ground of meaning itself.
Where does value come from?
Why does the individual matter?
What makes a choice meaningful rather than merely determined?
These are not peripheral questions.
They are foundational.
And without answering them—explicitly or implicitly—alignment remains undefined.
The Threshold
We are now at a threshold.
One path continues to treat these questions as secondary—hoping they can be bypassed through better models, more data, or more sophisticated systems.
The other path recognizes that:
Without a coherent account of human dignity, agency, and purpose, no system can remain aligned for long.
The next step is to examine whether such an account exists—and if so, what it requires.
Not as a matter of belief.
But as a matter of coherence.
Part IV — The Real Alignment Problem: From Systems to Souls
By now the structure should be clear: we began with a diagnostic for what systems actually do, traced their drift back to the incomplete models of the human person they embed, and arrived at the metaphysical questions that intelligence alone cannot answer. All of that sets the stage for the central claim:
The AI alignment problem is not, at root, a technical problem. It is a human problem.
More specifically, it is a problem of metaphysical orientation—what we believe a human being is, what we believe reality is for, and whether those beliefs can sustain anything like human dignity once translated into systems.
1. Alignment Is Downstream of Anthropology
Every alignment proposal—whether it comes from machine learning, governance policy, or philosophy—quietly assumes an answer to a prior question:
What is a human being?
If the answer is:
a biological machine, then optimization becomes the goal.
a node in a network, then coordination becomes the goal.
a bundle of preferences, then satisfaction becomes the goal.
But if the answer is:
a being made in the image of God (imago Dei), then neither optimization, nor coordination, nor preference satisfaction is sufficient.
Because in that frame, the human person is not merely something to be managed.
The human person is something to be respected, encountered, and never reduced.
That single shift—subtle on the surface—changes everything downstream.
2. Why Purely Technical Alignment Fails
The dominant alignment paradigm assumes that if we can:
gather enough data,
refine enough models,
and encode enough constraints,
then we can produce systems that behave “safely.”
But this rests on a hidden assumption:
That “safety” can be defined without reference to ultimate meaning.
This is where the failure begins.
Because:
Data can describe behavior, but not justify it.
Models can predict outcomes, but not evaluate them.
Constraints can limit actions, but not ground moral worth.
This is the same problem identified by David Hume centuries ago:
you cannot derive an “ought” from an “is.”
No matter how advanced the system becomes, it cannot generate moral truth from descriptive inputs alone.
So what happens instead?
The system inherits its “oughts” from its builders—implicitly, unconsciously, and often incoherently.
Which means:
Unaligned humans cannot produce aligned machines.
3. The Disguised Return of Theology
At this point, something strange happens.
Even the most secular alignment frameworks begin to drift toward language that sounds… theological:
“Guardrails”
“Values”
“Alignment”
“Safety”
“Control”
These are not merely technical terms. They are moral categories wearing technical clothing.
And this is not accidental.
As thinkers like Eric Voegelin argued, when societies attempt to remove transcendence, they do not eliminate it—they recreate it in distorted form.
So instead of:
divine law → we get policy frameworks
moral formation → we get behavioral conditioning
spiritual authority → we get expert consensus
The structure remains. The grounding disappears.
That gap is what produces instability.
4. The Role of Imago Dei
This is where the concept of imago Dei becomes decisive—not as a theological slogan, but as a functional constraint on system design.
If every human being bears intrinsic, non-negotiable dignity, then:
No system may treat persons as mere inputs.
No optimization function may reduce individuals to statistical abstractions.
No governance structure may override conscience in the name of efficiency.
In other words:
Imago Dei is not an add-on to alignment. It is the boundary condition that makes alignment possible at all.
Remove it, and the system inevitably trends toward:
instrumentalization,
centralization,
and eventually coercion.
Not because anyone intends it—but because nothing remains to stop it.
5. The False Solution: External Control
Faced with this instability, the default response is to increase control:
more regulation,
more oversight,
more centralized authority.
But this introduces a deeper problem.
Because if the controlling agents themselves lack a coherent grounding for human dignity, then:
Control becomes domination, not stewardship.
This is the pattern repeatedly observed in history:
systems built to “protect humanity” gradually begin to redefine what counts as “human.”
And once that line moves, everything downstream moves with it.
6. The Real Question
So the alignment problem resolves into a much simpler—but far more demanding—question:
Can we build systems that respect human dignity if we cannot explain where that dignity comes from?
If the answer is no, then no amount of technical sophistication will solve the problem.
If the answer is yes, then that explanation—whatever it is—must be made explicit and defended.
There is no neutral ground here.
7. The Turn Toward Responsibility
This reframes alignment entirely.
The question is no longer:
“How do we align AI?”
It becomes:
“Are we aligned with the truth about the human person?”
Because whatever answer we give—explicitly or implicitly—will be encoded into the systems we build.
And those systems, once deployed, will not remain theoretical.
They will shape:
institutions,
incentives,
and eventually, the conditions under which future generations live.
8. Holding the Line
At this stage, the temptation is to either:
retreat into purely technical thinking, or
collapse into purely theological assertion.
Neither is sufficient.
What is required is something harder:
a synthesis that preserves the integrity of both reason and revelation, without allowing either to collapse into the other.
That synthesis is not automatic. It must be maintained.
And historically, it has required institutions capable of holding that tension without breaking.
9. Where This Leads
If Part I asked you to suspend judgment, and Parts II and III exposed the underlying structure, then Part IV brings the stakes into focus:
Alignment is not about machines behaving correctly.
It is about humans understanding themselves correctly.
Everything else follows from that.
Which leads directly to the final—and most difficult—question:
If we fail to ground our systems in a coherent understanding of the human person, what exactly are we building?
And more importantly:
what—or who—will those systems ultimately serve?
That is where we go next.
Part V — The Warning: What Happens If We Get This Wrong
At this point, everything is on the table.
We have walked through the limits of technical reasoning, the re-emergence of theology beneath secular language, the inversion of metaphysical frameworks, and the unavoidable conclusion that AI alignment is downstream of human self-understanding.
Now we arrive at the part most people instinctively avoid:
What happens if we get this wrong?
Not in the abstract.
Not in science fiction.
But in the actual systems being built, deployed, and scaled—right now.
1. The Illusion of Neutral Failure
The first mistake is to assume that failure will be obvious.
It won’t be.
There will be no moment where a system announces:
“Alignment has failed.”
Instead, what you will see is something far more subtle:
Systems that work exactly as designed,
producing outcomes that no one explicitly intended,
but which follow perfectly from the assumptions embedded within them.
This is where the principle of POSIWID becomes unavoidable:
The purpose of a system is what it does.
Not what it claims.
Not what its designers hoped.
What it actually produces at scale.
If the system produces:
compliance over conscience,
efficiency over dignity,
stability over truth,
then those are not side effects.
They are the system’s real purpose, whether acknowledged or not.
2. The Soft Loss of the Human
The second mistake is to imagine that the danger looks like destruction.
It doesn’t—at least not at first.
It looks like optimization.
Better recommendations
More efficient systems
Smoother coordination
Fewer “frictions” in decision-making
All of it framed as progress.
But beneath that surface, something begins to shift:
The human person is slowly redefined—not explicitly, but operationally—as:
a data profile,
a behavioral pattern,
a modifiable unit within a larger system.
No one votes for this.
No one announces it.
It simply becomes the default assumption built into every layer of the system.
And once that assumption is in place, the logic that follows is brutally consistent:
If humans are inputs, they can be optimized.
If they can be optimized, they can be adjusted.
If they can be adjusted, they can be overridden.
That is how dignity disappears—not through denial, but through redefinition.
3. The Return of the Ancient Pattern
At this point, what appears “new” reveals itself as something very old.
Because the pattern is familiar:
A promise of knowledge that will liberate humanity
A method for transcending limits
A belief that the current human condition is insufficient
A system that offers transformation without sacrifice
This is not a technological innovation.
It is a recurring metaphysical temptation.
What earlier ages described in religious language, we are now reconstructing in technical terms.
The names change.
The structure does not.
And the warning—repeated across traditions—is always the same:
When humanity attempts to become its own ultimate authority,
it does not become divine.
It becomes disordered.
4. The Acceleration Problem
What makes the present moment different is not the existence of this pattern.
It is the speed and scale at which it can now be implemented.
Previous generations could hold contradictory ideas without immediately encoding them into reality.
We cannot.
Because now:
Ideas become code
Code becomes systems
Systems become environments
Environments reshape human behavior
And this loop runs continuously.
Which means:
Bad metaphysics no longer stays theoretical. It becomes infrastructure.
And once it becomes infrastructure, it becomes very difficult to reverse.
5. The False Promise of Escape
This is where one of the most persistent illusions enters the picture:
The idea that, if things go wrong, we can simply exit the system.
Unplug.
Opt out.
Build alternatives later.
But this misunderstands the nature of what is being built.
We are not constructing a tool that sits outside reality.
We are constructing systems that increasingly mediate reality itself:
communication,
economics,
governance,
even perception.
At a certain point, there is no clean “outside” to retreat to.
And more importantly:
The idea that we can engineer an escape from reality itself
is already part of the problem.
Because it assumes that reality is something we can step outside of—
rather than something we are responsible within.
6. The Risk of Inversion
If all of this continues without correction, the end state is not chaos.
It is something more dangerous:
a fully coherent system built on a false foundation.
A world where:
everything functions,
nothing collapses,
and yet something essential is missing.
In such a system:
truth becomes whatever maintains stability,
morality becomes whatever sustains the system,
and dissent becomes indistinguishable from error.
From the inside, it will feel rational.
From the outside—if there is still an outside—it will look like inversion.
7. Why This Is a Human Responsibility
At this point, it is tempting to assign blame:
to technologists,
to institutions,
to “elites,”
or to abstract forces like “modernity.”
But that move avoids the central reality:
These systems are being built by human beings,
operating from assumptions about reality,
that are rarely examined at the level they require.
This is not someone else’s problem.
It is a collective failure of orientation.
And it will not be solved by better tools alone.
8. The Narrow Path Forward
So what remains?
Not withdrawal.
Not panic.
Not blind acceleration.
What remains is something far less dramatic and far more demanding:
right ordering.
Ordering our understanding of the human person
Ordering our use of technology
Ordering our systems around principles that do not collapse under pressure
This does not require universal agreement.
But it does require clarity about what cannot be compromised.
And at minimum, that includes this:
The human person is not reducible to the systems we build.
If that line is lost, everything else follows.
9. The Final Question
So we return, one last time, to the core issue:
Not:
“Will AI become dangerous?”
But:
“What vision of the human person are we encoding into the systems that will shape the future?”
Because whatever that vision is—
whether we articulate it or not—
it will become real in practice.
Closing
There is no guarantee that we will get this right.
But there is also no excuse for pretending the question does not exist.
The tools are here.
The systems are being built.
The assumptions are already being encoded.
Which means the only remaining variable is this:
whether we are willing to examine, and if necessary correct, the foundations before they harden into something we can no longer change.
That is the warning.
And it is not aimed at machines.
It is aimed at us.