<an attempt to collapse the structure>
There are many paradoxes in contemporary art. When you dissect these paradoxes one by one, you eventually realize that the only constant is our biological nature. Whether one declares to "resist" or to "embrace" biology, it is always the biological nature that occupies the high ground.
"Will to Power", "Leviathan", "Evolutionary Theory" have vividly portrayed the dominance of the biology - all human activities are for the right to survive. In many years of art practice in the past, there often needed ideologically packaging to gain the audience's "perplexity" or "imagination of a higher power" to assert a hierarchical position in the art-creating-viewing hierarchy. Then through this power hierarchy, obtain materials and other resources from the audience.
I used to claim that art should return to its most original function: providing experience. But obviously the "illusion of power" is also an experience; many people are addicted to it, pursue it tirelessly, and often regard it as the only value in art.
What is more confounding is that this pursuit of power is so deeply rooted in our biology that anyone who refuses to participate in the power game will surely fall from the power tree and die a terrible death.
This is another reading of "if one hears the truth in the morning, one may die in the evening": you will die not long after understanding the grand principle of the world. Of course, this rests on a premise: the person who understands this grand principle has enough arrogance to reject it.
Imagine that art is a mental drama painstakingly constructed by the artist: it is only an illusion of power, not power itself.
Once this act of constructing illusions is seen through, the imagined power system collapses.
After it collapses, how should we face art?
Should we ignore it, reject it, indulge in it, attain enlightenment, or die?
<an ethics report gone rogue>
Over the past few months, AI-generated graphics have rapidly advanced from being of limited practical use to surpassing the abilities of human graphic artists. Graphic artists like myself have seen our skills and advantages overshadowed by this sudden diffusion of AI technology. A series of events occurred in quick succession: an influx of new participants; AI-generated content (AIGC) flooding professional sharing sites and social media; artists protesting against AIGC; and many artists losing their jobs.
One problem stood out to me: how should an artist cope with such rapid technological advancements that have existential implications?
The ethical issues surrounding this question can be divided into two categories: 1) the ethics of technological advancement, and 2) the ethics of my own practice.
The following thoughts arose in chronological order:
1. In the grand scheme of things, my reaction as an artist to this power dynamic seems futile. Measured against the livelihoods it threatens and the countless hours of training it invalidates, this particular technological innovation seems unethical from this perspective.
2. Technology has democratized our jobs and training, delivering years of hard-earned knowledge to everyone's fingertips. This seems to have improved both the quantity and quality of graphical content for people to enjoy. From this perspective, such technology can be ethically justified.
3. Is an increase in graphic content beneficial for human life and health? This is a complex issue.
4. Is something beneficial for human life and health inherently moral? What if there is a broader narrative to this universe that transcends human perspectives? If our wellbeing is causing harm to other conscious beings in this world, should we advocate for their rights as well?
5. What if short-term harm promotes long-term survival? What if harm to a small group is considered beneficial for a larger group? If sacrificing a hundred humans to aliens could save the entire planet, would you make that sacrifice? If so, who would you sacrifice? What does your choice reveal about your ideology, values, and the underlying biological constructs that guide your decisions?
6. What if you must choose between two options: one that makes you seem immoral but saves many people from harm, and one that appears moral but causes more harm? If you had to choose between murdering a person and letting that person press the nuclear button, which would you choose? If you had to choose between publicly harming a baby in return for eradicating all such harm, or not harming the baby and allowing such harm to persist indefinitely (an unbounded number of babies will be harmed), which would you choose? The last two questions are structurally similar, so what influences your different choices? One could be perceived as heroic and the other as perverse. What does this reveal about morality as a signal, morality as a concept, and morality as a biological construct? Is morality truly about caring for others, or is it merely another survival mechanism?
7. How can we predict long-term consequences? Even if we can predict long-term consequences to a degree, what if the long-term effects of various factors intermingle, leading to an unpredictable outcome? What's worse, we might overlook this unpredictability and believe we've made a correct moral judgment.
8. If there's a conflict between short-term and long-term consequences, to what extent should we allow the benefits of one to override the harms of the other? For example, nuclear bombs are deadly and can cause immense harm, but they also serve as a deterrent, making international conflicts significantly more civilized and reducing bloodshed. If Ukraine had nuclear weapons, would Russia act differently?
9. To what extent does our ability to foresee long-term consequences impact our decisions?
My educated guess is that if there is sufficient evidence, support, and legitimate reasoning to argue that my research is ethically sound, then that should suffice for an ethics report. But what if I can justify the invention of the nuclear bomb? Does that make it ethical? If an ethics committee had been asked to approve the invention of the nuclear bomb, it certainly would not have approved it, yet in reality the bomb turned out to be a vital instrument of peace. So, what gives? Or would the ethics committee have approved it, had it been well reasoned to be ethically sound?
I don't believe an act can be definitively classified as ethical or unethical. An act's ethical status can only be determined relatively, according to the specific context and scope of judgement, which are easily manoeuvred (leaving the ethical judgement with validity bordering on zero). Hence the act of justifying would not align with my definition of an ethical stance.
I believe a relatively contextually-stable ethical thing to do is to let the public and time be the judge. Can such a stance be articulated without impairing my ability to pass an ethics evaluation? Or is the ethics report really an argument for the ethical justification of my research from my perspective (without necessarily needing to obtain a definitive answer)?