Our Workless Future

Artificial intelligence
Sunday, 11th June 2017

Could the universal basic income usher in an age of hyper-dependence, hyper-surveillance and a growing divide between technocratic elites and mainstream humanity?

Two of the most influential business leaders in the tech industry have thrown their weight behind the hitherto fanciful basic income, a cause until recently championed only by idealistic greens not known for their economic competence. Facebook founder and CEO Mark Zuckerberg and SpaceX CEO and robotics evangelist Elon Musk both openly support the concept. These are of course among the same tech billionaires whom our more traditional leftwing politicians would love to tax to fund their welfare and public spending initiatives.

To many, basic income sounds too much like universal welfare for all, and we really have to ask who would foot the bill. So let’s do some back-of-the-envelope calculations, shall we? Last year the UK government spent a whopping £780 billion. That works out at around £11,500 per person or £23,000 per worker, only 9% of whom are employed in manufacturing or agriculture. At current prices it’s hard to live on less than £1,000 a month once we include rent or mortgage repayments. A realistic basic income would thus be around £1,000 per month for adults and probably £500 per month for children under 16. That’s a phenomenal sum of around £710 billion, virtually our entire public expenditure. Admittedly we’d save around £200 billion on welfare, pensions and in-work benefits, which are quite considerable for low-paid workers (essentially anyone earning less than £24,000 per annum).

Now, you may argue that we could adapt to a greener, lower-consumption model and make do with much lower basic incomes. But that doesn’t change the fundamental maths. If in the near future we let most working-age adults rely on basic income, then to maintain social harmony we’d need to guarantee the kind of living standards to which we are accustomed. In all likelihood the authorities will redefine basic income dependents no longer as unemployed but as work-free citizens, lifelong students or carers who contribute to society not through paid work but simply as responsible members of the community, helping to raise the next generation or somehow involved in voluntary community projects or awareness-raising campaigns.
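The back-of-the-envelope sum above is easy to check. A quick sketch in Python, assuming rough UK mid-2016 population figures (about 53 million people aged 16 and over, 12.5 million under 16 — illustrative estimates, not official statistics):

```python
# Rough UK population estimates (illustrative assumptions)
adults = 53_000_000       # people aged 16 and over
children = 12_500_000     # people under 16

adult_rate = 1_000 * 12   # £1,000 per month for adults, annualised
child_rate = 500 * 12     # £500 per month for children, annualised

total = adults * adult_rate + children * child_rate
print(f"Annual cost: £{total / 1e9:.0f} billion")  # roughly £710 billion
```

With these assumptions the total comes to about £711 billion a year, consistent with the £710 billion figure cited above and only a little short of total public expenditure.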

Of course, the early basic income enthusiasts would have you believe that universal welfare would unleash a new era of creativity, enabling us all to pursue our personal artistic, literary or inventive passions. We could take time off not only to raise our children, but also to learn new skills, explore the world or participate in new intellectual endeavours. If we were all highly motivated academics, gifted artists or talented sportspeople or entertainers, I think it could all work out very well. The whole world would become a giant university campus. We may choose to work for a few years as a brain surgeon, psychiatrist, artificial intelligence programmer, robotics engineer, architect or social policy researcher, earning good money, and then take an extended sabbatical to investigate the meaning of life.

The trouble is that most of us are not highly motivated academics, and unless our livelihood depends on work involving physical and/or mental effort, we are very likely to succumb to carefree leisure. Numerous studies have shown quite conclusively that unconditional welfare provision traps all but the best-motivated and most conscientious people in a decadent lifestyle of easy options and self-indulgence. It’s so easy to retreat into a lifestyle of virtual gaming, online video watching, junk food bingeing and stupefaction. Long-term welfare recipients are statistically much more likely to suffer from emotional distress (usually defined as mental illness), eating disorders and dysfunctional relationships. Worse still, these psychosocial maladies tend to get worse with each generation.

Welfare dependency controversy

Dr Adam Perkins, lecturer in the neurobiology of personality at King’s College London, rattled the politically correct neoliberal consensus with his book, The Welfare Trait, which showed rather conclusively how welfare dependence not only engenders helplessness but also affects our personality, helping to explain the rise of identity politics and the growing emphasis on mental health as an issue we must address. Perkins cites voluminous evidence to support his contention that habitual welfare claimants tend to be less conscientious and agreeable than those of us who have to work for a living. Far from building a more egalitarian society with greater social solidarity, worklessness fosters a narcissistic culture of entitlement, treating a growing section of the adult population as children in need of constant supervision by the minority who work. Not surprisingly, mainstream academia and social justice warriors have taken offence and gone to great lengths to challenge Dr Perkins’ hypothesis, claiming for example that his conclusions could lend support to eugenics. However, if you have actually read the book or listened carefully to a couple of good presentations Dr Perkins has given on the subject, you’ll find his thesis emphasises psychosocial rather than genetic causes of personality traits. If laziness were largely an inherited trait, we would have to explain how it could have evolved before the expansion of the modern welfare state. In traditional societies lazy people would fail to procreate unless they inherited substantial wealth (even if the idle could mate, they would be unable to fend for their offspring). So laziness as a genetic trait could only have spread among the aristocratic classes. Most people alive today are descendants of hard workers. Our forebears had little choice.


However, some may argue that welfare stigmatises its dependents, while everyone, including those who choose to work for extra financial rewards, would be entitled to basic income, removing any stigma. We would simply treat our basic income as a universal right, just like water or air, that modern 21st-century technology can guarantee everyone. Bear in mind that the coming AI revolution will not only displace manual workers and machine operators, it will also automate most clerical jobs too. Machine learning is already smart enough to perform most tasks currently assigned to accountants, legal secretaries and marketing researchers. Any jobs with predictable results and a finite set of potential variables are ripe for computerisation. Indeed North American lawyers are already losing substantial business to online search engines. Why would you pay someone £100 an hour just to discover a legal loophole that you could have found through a few simple search queries and reading a few forum posts to sort the wheat from the chaff? Online legal advice, sometimes with modest fees, is already a reality. The harsh truth is that soon there will be few high-paying jobs even for the most industrious adults within the low-to-medium IQ range, and as time goes by the minimum IQ threshold for lucrative professional roles will only rise. That doesn’t mean there will be no jobs for ordinary people in the medium IQ range, but such jobs will be non-essential and more concerned with persuasion and social control than with providing any mission-critical services. Now, you may think some service-sector roles such as care workers, nurses, bar staff, hairdressers and prostitutes are ill-suited to robotisation as we still need an authentic human touch. The transition may be more gradual for these roles as AI software developers refine human behaviour emulators, but already Japanese sex workers are worried about competition from life-like sex robots.

Should we have seen it coming?

Governments in much of the Western world have tried to persuade us that their educational and social welfare policies serve to redress the imbalance between rich and poor and to give everyone, irrespective of their wealth or social background, equal opportunities to thrive. Unfortunately their policies have succeeded mainly in engendering greater dependency on social intervention rather than empowering ordinary workers to assume greater responsibility for the functioning of our complex society. In decades to come I suspect we will look back at the neoliberal hiatus between approximately 1980 and 2020 as the last attempt to make laissez-faire free-market economics work by incentivising people to take control of their lives. We can no longer build our economy on the flawed assumption that workers can earn enough not just to buy the goods that big business sells, but to fund all the services and infrastructure we need. Economic growth in the UK now tends to mean higher retail sales and more property speculation. One seriously wonders how the business model of discount stores works: chains such as Pound Stretcher and Poundland abound in rundown towns across the UK, selling cheap end-of-life merchandise to local communities reliant on welfare and public sector jobs.

Behind the scenes the authorities have long been preparing for a future where most people need not undertake either intellectually challenging or physically demanding work, i.e. the kind of jobs we really need, as distinct from non-jobs whose main purpose is occupational therapy. Our schools seem increasingly interested in familiarising youngsters with new technology and instilling a new progressive set of social values rather than focussing on the hard skills we might need if we wanted to gain some degree of self-reliance. Mainstream schooling strives to produce socially normalised young consumers who worship both big brands and transnational institutions. Anyone who strays from this norm is likely to be labelled with one personality disorder or another. Students who show some degree of analytical intelligence are primed for low-level managerial roles, inevitably joining a mushrooming bureaucracy of ideologically driven experts and researchers. Meanwhile the health and safety culture that has infiltrated so many aspects of our lives serves to transfer responsibility from families and independent adults to myriad agencies. It hardly takes a huge leap of imagination to foresee that in the near future these agencies will be supplemented by artificial intelligence. However, this raises the question of whether remote advisors have our best interests at heart. Your close relatives and best friends may well give you honest advice that helps you attain your primary goals in life. Social engineers, on the other hand, are interested not so much in you as an autonomous human being as in the smooth functioning of a much larger and more complex society.

Collectivism for the Masses and Individualism for the Elites

Human creativity is both a prerequisite for technological and cultural progress and a hindrance to social harmony, as it relies on competition among individuals and tends to empower critical thinkers to the detriment of social conformists. As we begin to harness the power of artificial intelligence and versatile robots more and more, the managerial classes will want to restrict the independence of creative types and channel their talent to serve the interests of technocratic corporate elites. One phenomenon that has largely escaped the attention of social analysts is the huge growth in the recruitment industry. In many niche professions there are now more recruiters than talented specialists. A nominally free-market economy has created a reality where the development of a software application requires one real programmer, two user interface builders, two designers, three usability testers, one project manager, a business analyst, an information systems manager, three marketing executives and potentially two or three recruiters. In this endeavour only the programmer is mission-critical. Interface building and design could be largely automated, as could usability testing up to the final user acceptance stage. Recruiters serve not just to identify people with highly specialised skill-sets, but to ensure that such individuals never take full ownership of their creations, gaining experience only as well-paid, loyal team workers who know their place. The more circumscribed our professional focus, the less we see of the bigger picture. All too often we dismiss the evidence of our everyday lives as mere flukes and side effects of social progress rather than integral parts of a new hierarchical technotopia.

Letting the genie out of the IQ bottle

As artificial intelligence evolves to undertake more low-level managerial and analytical roles, large businesses will only employ talented individuals with high IQs, rare artistic flair or charismatic personalities. Freelancers will find it harder to compete in a world without machine-augmented intelligence. Yet since the end of World War Two, mainstream social scientists have preferred to suppress the significance of differential IQ scores among different sections of the population. While it may be politically incorrect to classify a large portion of humanity as intellectually inferior, tech giants only hire the best. They often have little trust in mainstream education and are fully aware that many universities reward conformity and comprehension rather than analytical thinking. As a contract Web application developer I’ve often had to take tests, but most tested analytical skills and problem solving more than specific knowledge of a given programming language or framework. If I want to learn the syntactical differences between Kotlin and Swift (just to mention two up-and-coming languages that have much in common), I can always look them up online or just let my IDE (integrated development environment) do it for me. If you know one, you can easily learn the other, but if you have yet to learn the difference between a mutable and an immutable object, you’re of little use to most employers.
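To illustrate the kind of conceptual understanding that transfers between languages, here is the mutable-versus-immutable distinction in a few lines of Python (lists are mutable, tuples immutable — the same idea appears in Kotlin as `MutableList` versus `List`, or `var` versus `val`):

```python
# Mutable: a list can be modified in place, and every name
# bound to that list sees the change.
nums = [1, 2, 3]
alias = nums            # both names refer to the same object
alias.append(4)
print(nums)             # [1, 2, 3, 4] -- visible through both names

# Immutable: a tuple cannot be modified; "changing" it
# actually builds a brand-new object.
point = (1, 2)
moved = point + (3,)    # fresh tuple; point itself is untouched
print(point)            # (1, 2)
print(moved)            # (1, 2, 3)
```

Grasping why the first change is visible through both names while the second leaves the original untouched is exactly the sort of language-independent concept an employer’s test is probing for.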

Most people alive today, at least in countries with a modern education system, have internalised the notion that the Earth orbits the Sun. Many could recite a cursory explanation for this supposition, but only a few could arrive at such a conclusion from astronomical observations alone and even fewer would be prepared to risk social exclusion if they had to challenge orthodoxy to assert their hypothesis as Galileo Galilei famously had to do before his imprisonment and house arrest in 1633. Any intellectual task that has been successfully accomplished and meticulously explained over and over again through human input can ultimately be assigned to smart applications able to deal with complex logical processing.

Late neoliberalism (as I believe this era may be called later in the century) still rewards hard work and creativity and allows the most successful to enhance their physique and intellectual performance through cosmetic surgery, private medicine, private education, food supplements and exclusive neighbourhoods. The rich have always been the first to benefit from new technologies. When bio-engineering merges with nano-robotics and artificial intelligence, the affluent classes will effectively buy an evolutionary advantage over the rest of humanity by adopting machine-augmented intelligence. Future alpha and beta humans could gain instant insights into complex problems that previously would have required extensive experience and lengthy analysis. One section of humanity would be able to detect deception instantly and psychoanalyse unaided humans, while the workless classes would be mere guinea pigs in the elite’s social engineering experiments. The real danger is that the masses could be lulled into a false sense of security: just as many peasants in feudal times worshipped religions governed by an ecclesiastical hierarchy, the consumer classes of the future would worship the evangelisers and opinion leaders of our technotopia.

Who’s really in control?

So let’s cut to the chase. The real flaw in the basic income concept is not that greedy capitalists want to force us to work for a living (which would only be to maximise profits), but that it would disempower most of the population. As mere welfare claimants we would have no bargaining power at all. Any freedoms we may retain would be at the discretion of the elite who still have meaningful jobs. Artificial intelligence and virtual reality could easily give the wider public the illusion of democratic control. As dependants it would no longer matter if we suffered from learning disabilities or mental health challenges, which are increasingly treated not so much as psychosocial problems or neurological deficits, but as divergent categories of people whose special needs must be accommodated. Currently an intellectual disability usually only applies to people with an IQ below 70. The US army refuses to hire people with an IQ below 85. Most semi-skilled jobs require an IQ in the range of 90 to 105. Most high-skill professions (doctors, engineers, scientific researchers etc.) require an IQ over 115. Beyond an IQ of 120 (approximately the 90th percentile), fewer and fewer people can compete on natural analytical intelligence alone. By the time reliable and effective machine-augmented intelligence devices become available to wealthy buyers, this subgroup of humanity could acquire genius status, setting it apart from mainstream humanity, who by comparison would then have significant learning handicaps.
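The percentile figures above follow from the convention that IQ scores are normalised to a mean of 100 and a standard deviation of 15. They can be checked with Python’s standard library:

```python
from statistics import NormalDist

# IQ tests are conventionally normalised to mean 100, standard deviation 15
iq = NormalDist(mu=100, sigma=15)

for score in (70, 85, 100, 115, 120):
    pct = iq.cdf(score) * 100
    print(f"IQ {score}: {pct:.1f}th percentile")
```

An IQ of 70 sits around the 2nd percentile, 115 around the 84th, and 120 just under the 91st — consistent with the rough thresholds quoted in the text.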

Is there a viable alternative that could protect us against technocrats?

When the computer revolution first entered public consciousness in the late 1970s, many foresaw a 20-hour working week and early retirement. Quite the opposite has happened. Young professionals are now working longer hours to further their careers and pay off debts, while the age of retirement is rising progressively to 70 in the UK. While we should certainly welcome our longer life expectancy, we're clearly not sharing our collective workload very fairly. However, when left to market forces alone, employers prefer to hire fewer reliable, highly skilled professionals working longer hours than to spread the workload and invest in training apprentices who have not yet acquired the same expertise. It may be more expedient for future employers only to hire workers with an IQ over 120 while bankrolling consumer welfare and sophisticated social engineering programmes, but is it fairer? Should mainstream humanity, i.e. people within the normal IQ range, not contribute to the organisation of their society by being intimately involved in the development of the technology that makes their lives possible? I know one experienced programmer with the right productivity tools can outperform a large team of novice programmers. Indeed I'd go further: most novice programmers write naive routines that, if deployed in a production environment, could be very hard to maintain, but if you don't start with simple scripts you will never progress to more advanced concepts. By the same logic we could argue that learning arithmetic at school is redundant because calculators can do it faster. This is true, but if you rely solely on calculators, how do you know their output is correct? What matters is not simply performing a cerebral task, but actually understanding what's going on. Let's take that a step further.
If we rely on search engines and fact-checkers to find out the truth about our government and business leaders, how can we verify the objectivity and completeness of the selective information they provide? How do we know which facts they have suppressed? Indeed some may wonder what the purpose of life is if we are denied the chance to exercise our free will and critically explore the real world around us. If we are kept in a state of artificial contentment, then nobody will be motivated to change the system, which may well malfunction for reasons beyond the comprehension of most commoners. The more people are involved in the research and development process, the harder it will be for a superclass of humans to pull the wool over our eyes. If you care about personal freedom and democracy, it may make more sense to share a complex R&D project among 20 people of average IQ than to let one genius have a monopoly over true understanding.