My Framework
Cameron Harwick has a great write-up of his macroscopic framework for thinking about the world. Not only is it insightful and well written, but I agree with 90% of it.
Still, that 10% contains some major caveats. I’ll elaborate on our points of disagreement below. But please read his post first.
10. Social norms are generally not rationally justifiable
I disagree.
I think of norms as valid types of reasons that we give to or ask from interlocutors to justify behavior. That makes them inherently rational. When this point is missed, the tendency is to demand a “why” behind the norm, when the role of norms is to be the “why” behind the action.
Maybe this is what Cameron means when he writes that norms “must be accepted either tout court or on the basis of a mythology.” But you can see why, if norms are rational at their core, this phrasing is misleading.
I have written on this point in a post called Sacred and Profane Reasons. In short, I think the notion that desires, preferences, values, and norms are non-rational or even irrational is not only mistaken, but has perverse consequences. Namely, it makes us instrumentalize imperatives, leading to Pareto-inferior social orders.
Nonetheless, I held this view for most of my thinking life, but after reading the exceptional book Following the Rules by Joseph Heath I now see it as untenable. I’ve blogged many excerpts from that book, but a key one on this topic is available here.
I believe coming around to the rationality of norms has made my framework more coherent. To illustrate, consider Cameron’s three opening points:
- The universe is intelligible.
- The language faculty is the decisive difference between human and animal consciousness.
- The fact-value distinction is irreducible.
I fully agree with all of this. But moreover, I think these points, taken together, imply the rationality of norms—especially once norms are conceptualized as [cognitive] moves in a language game. As Cameron writes below point 3:
Perception is filtered and structured by pre-conscious judgements about the significance of various aspects. This judgement (“theory”) is not essentially different from value judgements which operate on the conscious level.
Thus if Cameron really views value judgments as non-rational, then he’s committed to all judgments being non-rational, which contradicts the intelligibility of the universe.
I have also written that calling an imperative or norm a “myth” (as Cameron does for liberal norms and natural rights) amounts to a category error. Assertions and imperatives stake very different types of validity claims. For example, I can assert the non-existence of God while still holding on to the imperative of ritual. Imperatives don’t carry an intrinsic epistemic burden.
The confusion arises because ethical vocabularies using words like “ought to” and “rights” transform imperatives into assertions. But this doesn’t change the fact that the concept of “rights” is at core about expressing certain imperatives. It simply lets us express imperatives in a more flexible, natural way.
In Theory and Practice Reconciled I went so far as to define progress as any process whereby our theoretical assertions come into alignment with our practical imperatives. In other words, progress equals cooperation without the assistance of pious fictions.
This brings us to point 4:
Variation and selection are necessary and sufficient to explain complex order.
Necessary, yes, but not sufficient. This one goes to the importance of language, and its role in normative / cultural reproduction. As communicative animals, our societies are subject to much more directionality than can be explained by purely Darwinian types of selection. I came to this view from reading Joseph Heath as well: the second and final chapters of Following the Rules, and his synopsis / defense of Habermas’ theory of discourse ethics.
Combining Cameron’s points 1 through 3, and amending points 10 and 4, we have basically arrived at the precepts of Hegel’s German Idealism. Or, as Robert Brandom prefers to call it, American Pragmatism.
Which brings me full circle to Cameron’s first point: The universe is intelligible. And yes, “on its face, this is a statement about the mind, not about the universe.”
In Defense of Status Competitions
As SpaceX successfully landed its 23-story-tall Falcon 9 rocket in an upright position, Jeff Bezos, the CEO of Blue Origin (a rocket company which performed a superficially similar, but technically much less impressive, feat days before), tweeted the following:
Congrats @SpaceX on landing Falcon’s suborbital booster stage. Welcome to the club!
— Jeff Bezos (@JeffBezos) December 22, 2015
Ouch! Within an instant, Bezos became the target of scorn for hundreds of fawning SpaceX and Elon Musk fans who derided Bezos’ “welcome to the club” comment as classless and backhanded. Yet as my colleague Andrew noted at the time, “given that space exploration is mostly a billionaire dick measuring contest, petty squabbling is probably the best motivator we could ask for.”
I think this is exactly right, but I will go a big step further. “Dick measuring contests,” more generally known as status competitions, are often called “wasteful,” “zero-sum,” and “inefficient.” Yet even when those labels are technically accurate (and they often aren’t—the private sector space race, for example, is clearly socially useful), another important truth can hold simultaneously: Status competitions are our main, if not only, source of meaning in the universe.
The Anxieties of Affluence
For all the wealth controlled by the three comma club, they turn out to be relatively poor when it comes to status goods. The reason relates to the inherent positionality of status. As in a game of King of the Hill, moving up a rank necessarily means someone else must move down one, with the top-most players having the least to grab on to. Climbing from second-from-the-top to “King” is thus exponentially harder than moving from third to second, fourth to third, and so on. And for whoever is King, with no one above to latch on to, the only way to truly secure one’s position against the penultimate scourge would be to invent a (proverbial) sky hook.
If not for this zero-sum (at the psychosocial level) drama, what would drive Musk or Bezos to invest so heavily in their own (quite literal) sky hooks? Bezos’ tweet is at least evidence that Musk’s aeronautical successes have gotten under his skin—ahh, the anxieties of affluence. But all that means is one of the world’s most socially productive people has all the more reason to wake up in the morning.
In contrast, a middle-class, median-IQ American who wants to broadcast status relative to their peers can always buy a bigger house, drive a faster car, learn a new talent, travel to more exotic places, or give more to charity. That is, the space to broadcast ever greater social distinction is seemingly unbounded from above. This was the nouveau riche mindset of Elon Musk circa 1999, when he bought (and later crashed) a million-dollar McLaren F1. But today, as an ennuyé riche multi-billionaire, simply owning an awesome car is old hat, cheap talk, something any rich CEO can do. So now he builds and designs even better cars from first principles, incidentally spurring innovation as he literally pushes against the physical and technological boundaries of keepin’ up with the Bezos.
As the McLaren incident shows, for all his self-effacing talk about saving humanity from extinction, even Musk is human, and in that humanity is ultimately motivated by subterranean vanity. Bezos’ only sin was to let his vanity see the light. At least he punches up.
Darwin’s Wedge
Critics of the free market point to these sorts of positional arms races as the downfall of the neoclassical economists’ conception of efficiency. On the one (invisible) hand, competition and exchange can guide the butcher and baker to produce meat and bread for the common good. On the other hand, identical competitive forces can lead nations to the brink of nuclear war, marketing and political campaign budgets to balloon, and large SUVs to pollute the roads due to safety in relative size. That is, individual incentives need not be aligned to the collective good. (As I’ve argued before, classical liberals like Adam Smith understood this full well).
Robert Frank influentially explained markets where individual and collective goals diverge in terms of what he calls Darwin’s Wedge (or what writer Jag Bhalla variously calls “dumb competition” and “spontaneous disorder”). The term comes from evolutionary biology, where wasteful arms races are ubiquitous. In the classic example, deer evolved large, cumbersome antlers because whenever a mutation made a buck’s rack marginally larger he was able to beat out his sexual competitors and reproduce more, passing on the trait. But since what really matters is not the absolute size of the antlers, but their size relative to the local average, competition over the trait led sexual selection to favor ever larger antlers up to the point where the marginal benefit of a slightly larger antler equaled its marginal cost (i.e. until it was evolutionarily stable).
In economics MB=MC is the mark of optimality, but here it’s clear competition in some sense failed. Male deer must now go through life with awkward bone-branches protruding above their eyes, getting them caught on trees, and generally using caloric resources that might be better spent procreating. Had the ancestors of deer somehow colluded genetically to cap the size of antlers, or else to compete along some other, less handicapping marker of genetic fitness, the entire deer species would in some sense be made “better off” through greater numbers.
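To spell the wedge out in the economists’ own notation, here is a stylized sketch (the symbols and functional form are my own, chosen purely for illustration, not drawn from any of the sources above): each buck picks an antler size, mating success depends only on size relative to the herd average, and growing antlers carries a convex metabolic cost.

```latex
% Stylized positional arms race (illustrative symbols only).
% Buck i chooses antler size s_i; payoff depends on size relative to the herd
% average \bar{s}, minus a convex growing cost c(s_i) with c(0) = 0.
\[
  U_i(s_i, \bar{s}) \;=\; b\,(s_i - \bar{s}) \;-\; c(s_i),
  \qquad c(0) = 0,\; c' > 0,\; c'' > 0.
\]
% Taking \bar{s} as given, each buck grows antlers until marginal benefit
% equals marginal cost (the economist's MB = MC condition):
\[
  b \;=\; c'(s^{*}).
\]
% But in the symmetric equilibrium every buck chooses the same s*, so the
% relative-size advantage cancels and only the cost remains:
\[
  U_i(s^{*}, s^{*}) \;=\; -\,c(s^{*}) \;<\; 0 \;=\; U_i(0, 0).
\]
```

Every buck is individually optimizing, yet the herd ends up strictly worse off than if antlers were capped at zero: MB = MC for each individual, collective waste for the species.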
Optimally Boring
But alas, genes are selfish. As the famed selfish gene raconteur Richard Dawkins himself once wrote:
In a typical mature forest, the canopy can be thought of as an aerial meadow, just like a rolling grassland prairie, but raised on stilts. The canopy is gathering solar energy at much the same rate as a grassland prairie would. But a substantial proportion of the energy is ‘wasted’ by being fed straight into the stilts, which do nothing more useful than loft the ‘meadow’ high in the air, where it picks up exactly the same harvest of photons as it would – at far lower cost – if it were laid flat on the ground.
And this brings us face to face with the difference between a designed economy and an evolutionary economy. In a designed economy there would be no trees, or certainly no very tall trees: no forests, no canopy. Trees are a waste. Trees are extravagant. Tree trunks are standing monuments to futile competition – futile if we think in terms of a planned economy. But the natural economy is not planned. Individual plants compete with other plants, of the same and other species, and the result is that they grow taller and taller, far taller than any planner would recommend.
And how lucky we are that this is the case! I am grateful for hemlock forests, flamboyant peacock tails, and even moose, the silly looking cousin to deer. Were it not for the playing out of these so-called wasteful competitions, instead of a world of immense biodiversity and wonder, life on Earth would consist in a hyper-efficient photosynthesizing slime spread thinly across the globe.
Indeed, the self-defeating hunt for relative fitness, including social (and sexual) distinction, is responsible for bootstrapping literally every one of our perceptual and cognitive faculties, including our ability to appreciate aesthetics. If not for positional arms races around sexual selection, for instance, it is unfathomable that beauty would exist at all. All creativity, when not strictly for survival, is rooted (in the sense of ultimate causation) in status games. Even the fact that I’m writing this right now.
Beyond biology, the same story explains the artistic and cultural diversity created by market societies. While there are no doubt those who think the classical era represented a pinnacle of cultural achievement, a stationary point we should have made every effort to hold in perpetuity, this is nothing more than the golden age fallacy. Instead, the greatest classical musicians were only great because they superseded their predecessors and contemporaries by chasing the same ephemeral distinction as Elon Musk and the white-tailed deer, and as such were contributing to a self-defeating cultural churn that baked in its own impermanence. This holds true today, as dozens of musical and artistic genres have been invented, grown steadily popular, and then gone “mainstream” and stale as their social cachet dries up.
Ironically, it is often those who are most critical of neoclassical economics who still seem wedded to its narrow and lifeless conception of optimality. Rather than moving beyond the Samuelsonian allocation paradigm to one based in creation, innovation and discovery, they double down on the dangerous illusion that positional status competitions can be easily muted or improved on by a central planner (the “designed economy” referred to by Dawkins). While there’s obvious merit in blocking literal arms races, tweaking the tax deductibility of marketing expenses, and so on, I always worry whenever I read calls for a general luxury tax, or other excoriations of variability in the type and quality of consumables.
In the extreme, this thinking is what underlay the Marxist-Leninist ideology that transformed Mao’s China into a literal “Nation in Uniform.” A bit earlier in history it also motivated the Soviet government’s attempt, and failure, to make the luxury goods used by the petite bourgeoisie available to one and all. Rather than trying to “eliminate” bourgeois values, in contrast, a capitalist society is healthy precisely because it enables a nation of rebels and the inequality that implies.
Resistance is Futile
One thing neoclassical economics did get right is non-satiation. Humans can never be fully satisfied: not with our mates, not with our station in life, nor with this final draft. However, this is not because we have neat, monotone preferences, but rather because relative status has shaped every corner of our psyche.
Buddhism rightly teaches that this dissatisfaction, called dukkha, pervades all of existence. As the Buddha supposedly once said, “I have taught one thing and one thing only, dukkha and the cessation of dukkha.” But why? If resistance is futile, why not embrace it? Satisfaction is overrated anyway. What person has ever achieved any kind of success or excellence without being tortured by anxiety, stress, or self-consciousness?
Of course Buddhists, like Stoics, would presumably question my definition of success. Maybe if we all meditated daily and simply learned to lower our expectations we’d learn to be satisfied with poverty. Yet we ran that experiment and we self-evidently were not.
Rather than be zen about our lack of zen, even Buddhist practices have ironically become (or were they always?) their own dimension for pursuing social distinction. Don’t forget, Veblen’s magnum opus on status goods was called “The Theory of the Leisure Class,” and what could be a greater advertisement of belonging to the leisure class than the ability to sit absolutely idle for hours out of every day?
I don’t deny that meditation can be incredibly useful for reducing and controlling the stresses and anxieties of civilization. But if you’re a fan of meditation you should not deny, or feel shame about, the bourgeois half of your BoBo paradise. You are not above consumerism or hedonic treadmills. On the contrary, you are a leading light, an early adopter, an innovator in waste.
Otherwise, a monomaniacal focus on achieving nirvana (the state when all attachments and dukkha have melted away) simply becomes an agent-centric example of the social planner’s protoplasmic conception of optimality. At the same time, I recognize the futility in my own attempt to disillusion you, dear reader. As Mises wrote, human action is predicated on “the expectation that purposeful behavior has the power to remove or at least to alleviate felt uneasiness.” It just turns out that that expectation is as mistaken as it is incorrigible.
So meditate if you have to, but don’t be afraid to daydream a little, too. It may fill you with anxiety, and it definitely won’t make you happy, but later in life you just might find yourself building a spaceship to Mars.
This essay originally appeared on Sweet Talk
Intentional states, from Following the Rules.
The New Drone Registry Targets Consumers
The Department of Transportation just announced the creation of a national registry for drones. The rationale, according to public statements, is to aid in identifying owners and operators of errant drones, and close a “gap” in rule enforcement.
Left unmentioned was the fact that a drone registry already exists for commercial drones. While the FAA effectively prohibits commercial drone operations without specific approval, an exemption can be granted by petitioning through Section 333 of the FAA Modernization and Reform Act of 2012 (FMRA). To complete the petition, an operator is required by law to register the drone as a “small unmanned aircraft,” send in the original Bill of Sale from the manufacturer, and complete an affidavit of ownership (among other things).
Since commercial drones are already required to register with a national database, the new registry is therefore aimed squarely at consumers, hobbyists, and enthusiasts — the group of drone users who, up until now, had enjoyed relative freedom from the DoT and FAA’s regulatory indiscretions.
That freedom has been no accident. The FMRA, the current law of the land, explicitly restricts the FAA’s ability to “promulgate any rule or regulation” over recreational drones. Transportation Secretary Anthony Foxx has thus had to cite the DoT’s broadly-defined “safety authority” when pressed on the registry’s legality.
Regulatory creep
Nevertheless, as Marc Scribner points out, the Administrative Procedure Act (APA) requires any new rule to solicit comments from the public and take them into consideration. This process often takes years, but even if a new rule were finalized tomorrow the act would still allow 60 days for affected parties to come into compliance. This is problematic for the DoT, which is scrambling to get the details of the registry in place by November 20th. The timing is crucial, since up to a million drones are expected to be sold in the US this holiday season alone.
The broadness of the DoT’s safety authority is clearly a recipe for serious regulatory creep, which is why these procedures are essential for providing at least a modicum of accountability. Otherwise, infinitely contorted interpretations of its mandate would let the DoT become a de facto lawmaker, minus all the messy business of democratic oversight.
But since the DoT and FAA are proceeding as if the time constraints don’t exist, it suggests they may be planning to invoke a “good cause” exemption to these important APA rules. As drone lawyer Jonathan Rupprecht explains:
[T]he APA allows the FAA to issue a direct final rule without any notice when the FAA has good cause. Good cause is when the rulemaking process is “impracticable, unnecessary, or contrary to the public interest.” … “This exception can be used when an urgent and unsafe condition exists that must be addressed quickly, and there is not enough time to carry out Notice and Comment procedures without compromising safety.”
The idea that unregistered recreational drones are an imminent threat to public safety is, in a word, laughable. Yet more importantly, it runs contrary to the DoT and FAA’s own behavior, as they have allowed recreational drones to fly unregistered for years without exercising safety authority, and therefore materially contributed to their own deadline pressure.
These legal questions are just the tip of the iceberg. Suppose the FAA somehow finds the legal authority for a drone registry in time for Black Friday. That still does not make the registry itself anymore feasible on a practical level. …
Smart Device Paranoia
The following was originally written for TechLiberation.com:
The idea that the world needs further dumbing down was really the last thing on my mind. Yet this is exactly what Jay Stanley argues for in a recent post on Free Future, the ACLU tech blog.
Specifically, Stanley is concerned by the proliferation of “smart devices,” from smart homes to smart watches, and the enigmatic algorithms that power them. Exhibit A: The Volkswagen “smart control devices” designed to deliberately mis-measure diesel emissions. Far from an isolated case, Stanley extrapolates the Volkswagen scandal into a parable about the dangers of smart devices more generally, and calls for the recognition of “the virtue of dumbness”:
When we flip a coin, its dumbness is crucial. It doesn’t know that the visiting team is the massive underdog, that the captain’s sister just died of cancer, and that the coach is at risk of losing his job. It’s the coin’s very dumbness that makes everyone turn to it as a decider. … But imagine the referee has replaced it with a computer programmed to perform a virtual coin flip. There’s a reason we recoil at that idea. If we were ever to trust a computer with such a task, it would only be after a thorough examination of the computer’s code, mainly to find out whether the computer’s decision is based on “knowledge” of some kind, or whether it is blind as it should be.
While recoiling is a bit melodramatic, it’s clear from this that “dumbness” is not even the key issue at stake. What Stanley is really concerned about is bias or partiality (what he dubs “neutrality anxiety”), which is not unique to smart devices, and neither is opacity. A physical coin can be biased, a programmed coin can be fair, and at first glance the fairness of a physical coin is not really any more obvious.
Yet this is the argument Stanley uses to justify his proposed requirement that all smart device code be open to the public for scrutiny going forward. Based on a knee-jerk commitment to transparency, he gives zero weight to the social benefit of allowing software creators a level of trade secrecy, especially as a potential substitute to patent and copyright protections. This is all the more ironic, given that Volkswagen used existing copyright law to hide its own malfeasance.
More importantly, the idea that the only way to check a virtual coin is to look at the source code is a serious non-sequitur. After all, in-use testing was how Volkswagen was actually caught in the end. What matters, in other words, is how the coin behaves in large and varied samples. In either the virtual or physical case, the best and least intrusive way to check a coin is to simply do thousands of flips. But what takes hours with a dumb coin takes a fraction of a second with a virtual coin. So I know which I prefer.
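To make that concrete, here is a minimal Python sketch of the behavioral audit (the function names and numbers are mine, purely for illustration): flip the virtual coin many times and check that the observed share of heads stays within a few standard errors of one half.

```python
import random

def virtual_coin(bias: float = 0.5) -> bool:
    """A 'virtual coin': returns True (heads) with probability `bias`."""
    return random.random() < bias

def observed_heads_share(flip, n: int = 100_000) -> float:
    """Flip the coin n times and return the observed share of heads.

    For a fair coin and n = 100,000 the standard error is about 0.0016,
    so a share outside roughly [0.495, 0.505] is strong evidence of bias.
    """
    return sum(flip() for _ in range(n)) / n

if __name__ == "__main__":
    print(observed_heads_share(virtual_coin))                # ~0.500, fair
    print(observed_heads_share(lambda: virtual_coin(0.52)))  # ~0.520, detectably biased
```

The same in-use test works on a physical coin, of course; it just takes an afternoon of flipping instead of a fraction of a second.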
An hour versus a second may seem like a trivial advantage, but as an object or problem becomes more complex the opacity and limitations of “dumb” things only grow. Tom Brady’s “dumb” football is a case in point. After deflategate, I have much more confidence in the unbiasedness of the virtual ball in Madden. And to eliminate any doubt, I can once again run simulations – a standard practice among video game designers. This is what allows balance to be achieved in complex, asymmetrical video game maps, for example, while American football is stuck with a rectangle and switching ends at half-time.
In other words, despite Stanley’s repeated assertion that smart devices inevitably sacrifice equity for ruthless efficiency (like a hypothetical traffic light that turns green when it detects surgeons and corporate VPs), embedding algorithms is a demonstrably useful tool for achieving equity in the face of complexity that mirrors the real world. Think, for instance, of the algorithms that draw congressional districts to eliminate gerrymandering.
Yet even if smart devices and algorithms can improve both efficiency and equity, nonetheless they require a dose of human intention and therein lies the danger. Or does it?
Imagine a person, running late for something crucial, sitting at a seemingly interminable red light getting tense and angry. Today he may rail at his bad luck and at the universe, but in the future he will feel he’s the victim of a mind—and of whatever political entities are responsible for the shape of that signal’s logic.
In this future world of omnipresent agency, Stanley essentially imagines a pandemic of paranoid schizophrenia, where conspiracies lurk in every corner, and strings of bad luck are interpreted as punishment by the puppet masters. But this seems to get things exactly backwards. Smart devices are useful precisely because they remove agency, both in terms of our personal cognitive effort (like when the lights turn on as you enter a room), and in terms of discretionary influence over our lives.
In this respect, one of Stanley’s own examples directly contradicts his thesis. He points to
an award-winning image of a Gaza City funeral procession, which was challenged due to manual adjustments the photographer made to its tone. I suspect that if the adjustments had been made automatically by his camera (being today little more than a specialized computer), the photo would not have been questioned.
Exactly! The smart focus and light balance of a modern point-and-click camera not only make us all better photographers, but they remove the worry of unfair and manipulative human input. After all, before normal traffic lights there was the traffic guard, who let drivers through at his or her discretion. The move to automated lights condensed that human agency to the point of initial creation, thus dramatically reducing the potential for abuse. If smart devices mean we can automatically detect an ambulance or adjust camera aperture, it’s precisely the same sort of improvement.
The fact is that the world around us is already replete with a benign rationality, embedded not just in our technology, but also in our laws and institutions. Externalizing intelligence into rules and structures is the stuff of civilization – what’s called “extended cognition”. In the words of philosopher Andy Clark:
Advanced cognition depends crucially on our ability to dissipate reasoning: to diffuse achieved knowledge and practical wisdom through complex social structures, and to reduce the loads on individual brains by locating those brains in complex webs of linguistic, social, political and institutional constraints.
And yet we go through life without constantly looking over our shoulders. This is because we have adapted to the point where we are happily ignorant of the intelligence surrounding us. The hiddenness is a feature, not a bug, as it allows our attention to move on to more pressing things.
Critics of new technology always fail to appreciate this adaptability of human beings, implicitly answering 21st century thought experiments with 20th century prejudices. The enduring lesson of extended cognition is that smart devices promise to make – not just our stuff – but us, as living creatures, in a very real way more intelligent, expanding our own capabilities rather than subordinating us to the whim of invisible others.
To that end, I can’t help but be reminded of the tagline at TechLiberation.com: “The problem is not whether machines think, but whether men do.”
The Fall of Semiotics
This is the title of my favorite chapter in Joseph Heath’s Following the Rules. It’s about so much more than the problems with semiotics. It’s the clearest articulation I’ve ever read of the problems with meaning as reference, and how combining pragmatism with a linguistic twist offers a strong alternative explanation of intentionality, normativity and meaning in general. Anyone interested in philosophy should read it. The excerpt is at the link above – the whole book is available here as a pdf.
Anonymous asked: I have an economics question for you: There's no physical representation for the majority of "money," and as such there is no physical representation for the majority of debt. What would happen, if anything, if we just abolished all existing debt - personal, corporate, political, national, etc. That is to say if all people or entities with debt were set to a baseline of zero, as opposed to having imaginary negative money?
One person’s debit is another person’s credit. If you suddenly wiped out all debt it would in effect be a negative wealth shock to creditors, and a partial transfer to debtors. I say partial because significant wealth would simply be destroyed since there are gains from trade and dynamic efficiencies created by being allowed to borrow.
Your preface about “most money being non-physical” and use of the term “negative money” are confusing and irrelevant. Debt isn’t negative money. Money is a medium of exchange and a unit of account. You could get a loan from a bank and convert it to cash, increasing your holdings of the medium of exchange. But you will have to pay that bank back eventually, so you have a liability, which is measured in the unit of account. In other words, you have a negative asset which just happens to be measured in dollar terms.
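To illustrate the transfer with a toy example (the numbers and account names below are made up): abolishing a loan doesn’t vaporize “imaginary negative money,” it moves net worth from the creditor’s balance sheet to the debtor’s.

```python
# Toy two-party economy: wiping out debt is a transfer, not free wealth.
creditor = {"cash": 0,   "loan_receivable": 100}   # holds the IOU as an asset
debtor   = {"cash": 100, "loan_payable":   -100}   # holds the borrowed cash, owes it back

def net_worth(accounts):
    return sum(accounts.values())

print(net_worth(creditor), net_worth(debtor))      # 100 0

# Set every debt "to a baseline of zero":
creditor["loan_receivable"] = 0
debtor["loan_payable"] = 0

print(net_worth(creditor), net_worth(debtor))      # 0 100 -> creditor lost exactly what the debtor gained
```

And that’s before counting the lending that never happens afterward, which is where the destroyed gains from trade come in.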
Disrupting Bureaucracy — Plain Text
I have a new article available at ReadPlainText.com, all about how e-government may be a stepping stone to greater private governance. An excerpt:
The dance of the modern bureaucrat is algorithmic performance art. Dutifully executing protocols, they move pulpy packets of information between fleshy nodes, up and down interlocking hierarchies. Opaque and resistant to adaptation, they represent the lowest of the low hanging fruit in an era of disruptive innovation. Software may be eating the world, but some things are simply harder to digest.
Yet the time will come. In the early 1990s, falling transaction costs in the form of rapid innovations in information technology led to a wave of corporate outsourcing and downsizing. This trend trickled up to shape public-sector reforms worldwide, from major acts of deregulation to increased subcontracting of government services (though still within a lumbering and easily corrupted analog medium).
It wasn’t until the early 2000s that the internet and the spread of personal computers inspired a “first wave” of e-government initiatives, like early web portals for e-procurement, again following the corporate sector’s lead. Today, software-driven information systems and ad hoc contracting have become the private sector norm, automating significant administrative and logistical functions in ways that increasingly use artificial intelligence (sometimes called “autonomics”) to adapt in real time.
Some IT vendors are even anticipating personnel reductions of 30 to 40 percent in certain areas, brought on by these advances. As a result, we seem to be entering the twilight of 20th century corporate bureaucracy. This raises the question: Is government next?
My view is yes, and that it will happen sooner than many realize. The next wave of e-government reforms promises to be a tsunami, opening the highest levels of the state apparatus to technological disruption. Here’s how it will happen.
Anonymous asked: Can you start a Google Spreadsheet listing all the books you've read (as many as you can remember, and all going forward), a couple sentences worth of summary, and whether or not you recommend? And then share the link publicly (maybe put it in your Twitter bio or somewhere on this blog)?
That sounds like a lot of work.
Sacred and Profane Reasons
The following is an excerpt from my recent post at Sweet Talk, Sacred and Profane Reasons:
“One way to think about the sacred and profane distinction is in terms of types of reasons. “Do not trespass on the Holy of Holies,” says the Elder, “for it is sacred.” As far as practical reasons go, sacredness suffices. It has to suffice. Otherwise practical rationality enters an infinite regress, leading to decision paralysis for want of an epistemic foundation that simply does not exist. Put differently, the whole point of a good reason is that it provides a stopping rule in the game of giving and asking for reasons.
In the context of decision making, sacred reasons are categorical. That is, we feel duty-bound to respect valid claims of sanctity, where validity is a function of their coherence within the body of reasons we already take for granted (i.e. are presupposed) as implicit in existing social practices. Conversely, violations of sacred objects or spaces are socially deviant, even blasphemous.
The sacred, it seems, is the byproduct of an imperative. When questioning the imperative “do not kill,” an appropriate and argument-ending reply is “because life is sacred.” The question naturally arises: why do we not simply issue the imperative, full stop?
Well, some do. In the Mayan language of Sakapultek, norms of every kind are conveyed with the underlying imperative, their equivalent of “do” and “do not”, or indirectly through irony. As one would expect, this severely constrains the ways a norm can be expressed. For instance, they lack the ability to say “you ought to do x because y.” By having terms like “ought,” “right” and “wrong,” English speakers have a much easier time expressing and universalizing imperatives across domains.
This view is an upshot of taking Wittgenstein’s private language argument and meaning-as-use claim seriously. The alternative, meaning as reference, leads down a host of dead ends outside the scope of this post. Suffice it to say, thinking of “rightness” outside the context of use (as in doings, social practices, or deontic constraints over a choice set) leads to the search for a referent somewhere in the universe.
In other words, sacredness is not “out there” like some sort of metaphysical substance, but is rather a stand-in part of speech, a general predicate, that aids in the expression of certain types of imperatives. Rather than having to explicitly declare “do not kill __” in every discrete case, saying “killing is wrong” harnesses our existing competency with verbs and predicates to establish the general case.“
Stephen Hsu’s startling prediction
AI can be thought of as a search problem over an effectively infinite, high-dimensional landscape of possible programs. Nature solved this search problem by brute force, effectively performing a huge computation involving trillions of evolving agents of varying information processing capability in a complex environment (the Earth). It took billions of years to go from the first tiny DNA replicators to Homo Sapiens. What evolution accomplished required tremendous resources. While silicon-based technologies are increasingly capable of simulating a mammalian or even human brain, we have little idea of how to find the tiny subset of all possible programs running on this hardware that would exhibit intelligent behavior.
But there is hope. By 2050, there will be another rapidly evolving and advancing intelligence besides that of machines: our own. The cost to sequence a human genome has fallen below $1,000, and powerful methods have been developed to unravel the genetic architecture of complex traits such as human cognitive ability. Technologies already exist which allow genomic selection of embryos during in vitro fertilization—an embryo’s DNA can be sequenced from a single extracted cell. Recent advances such as CRISPR allow highly targeted editing of genomes, and will eventually find their uses in human reproduction.
It is easy to forget that the computer revolution was led by a handful of geniuses: individuals with truly unusual cognitive ability.
The potential for improved human intelligence is enormous. Cognitive ability is influenced by thousands of genetic loci, each of small effect. If all were simultaneously improved, it would be possible to achieve, very roughly, about 100 standard deviations of improvement, corresponding to an IQ of over 1,000. We can’t imagine what capabilities this level of intelligence represents, but we can be sure it is far beyond our own. Cognitive engineering, via direct edits to embryonic human DNA, will eventually produce individuals who are well beyond all historical figures in cognitive ability. By 2050, this process will likely have begun.
Will super intelligent humans emerge in tandem with AI? I don’t normally put much stock in the social predictions of theoretical physicists, but Stephen Hsu really knows what he’s talking about when it comes to AI and genomics. If you didn’t know, he’s a scientific adviser to the Beijing Genomics Institute, the biggest genome sequencer in the world. Read the whole thing. His personal blog is worth following, too.
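As a footnote on the arithmetic behind that “IQ of over 1,000” figure, using the standard convention that IQ scores are scaled to a mean of 100 and a standard deviation of 15:

```latex
\[
  \text{IQ} \;\approx\; 100 + 15 \times (\text{SDs above the mean}),
  \qquad 100 + 15 \times 100 \;=\; 1600.
\]
```

So Hsu’s “over 1,000” is, if anything, conservative on his own assumptions.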
Habermas took Rawls to task for being too Kantian. Philosophers don’t divine what is moral. Normativity arises endogenously through the interactions and discourses of real social actors. From Heath’s Rebooting Discourse Ethics.
According to Habermas, all traditionalism has become neotraditionalism. From Heath’s Rebooting Discourse Ethics.
This quote from Hayek’s The Fatal Conceit (1988) neatly encapsulates Habermas’ critical theory. So much so that it made me jump out of bed. Hayek argues norms elude rational justification, but their formation may be rationally reconstructed through interpretive history, and then amended piecemeal so as to accord with our other normative commitments. That he uses Habermas’ exact term of art strongly suggests a direct familiarity.