Soapbox: How Apple Stunted Technology’s Progress


Let’s look at the smartphone release cycle. A new iPhone is released approximately once a year, and every year there are people queuing up to get the latest model. While I don’t feel the need for the latest tech, those people are entitled to do that. The problem is the underlying rhythm that eventually forces me to buy a new phone too: the infrastructure is designed so that the software available (and sometimes essential) for the hardware is built to outpace it. Running the latest app updates, or even the latest operating system, leaves what was once my responsive little processor lagging behind.

Of course, I could just opt not to update my apps. But why should I have to suffer bugs and weakened security when developers could equally make updates optimised for each piece of hardware, so that I don’t feel left behind? Besides which, my mum (representative of a significant portion of the userbase) isn’t sufficiently informed to decide whether or not to update her apps – or even to know whether they update automatically.

To give another example, take my experience with Google Chrome. When I first got on board it was leaps ahead of IE, but as time went on and it got updated (and my laptop didn’t), it became too heavy to run, leaving me with the same experience I was having with IE when I originally converted. A recent podcast I listened to mentioned that Chrome favours stability and security over resource demand, but is it too much to want all three?

These are both prime examples of capitalism and consumerism’s dark underbelly: pieces of tech that could function perfectly well alongside new app development and updates; they are just intentionally not optimised for it.

But what about progress? Am I arguing against innovation? Absolutely not; I’m arguing for it. The system currently in place gives us a sense of false innovation: I’m buying a new phone, it must be better. I’m making progress! But by what metrics? Higher resolution, a slightly faster processor, a bigger (or smaller) screen. I’m buying into a culture of incremental improvement that does not foster innovation and step-change.

I want to see brand new tech: new ideas and concepts; different ways to work and play. And by buying into this yearly crawl forwards, we are actively discouraging titans like Apple from offering us something fresh and better.

So what is the solution? The cycle is difficult to break, and a few individuals who think the same way I do aren’t going to be able to change it. For the foreseeable future, iPhones and Galaxies will have significant day-one take-up. Perhaps the software developers make a better target. If just a few start making apps optimised for different grades of hardware, or with tweakable resource-demand settings, it will demonstrate that this is not impossible. Then, perhaps, the average user will come to expect more from their average app.

[Jake Harris]

Also, if anyone was interested in my post about popular science and its relationship with the public, you should check out this week’s Science Hour podcast from the BBC World Service. There is an excellent explanation of the difficulties involved by Peter Higgs himself.

 

Videogames: Toys, Time-Wasters or the Torch for Future Human Development? (Part Two): Entering the Era of Virtual Reality

This is the second part of an article about how the coming advent and proliferation of virtual reality will potentially change the way society works over the coming half-century. In this second part, I explain how VR could be the next innovation in videogames, how it will spread to non-game entertainment, and finally how it will reach other areas of society such as education, communication, fitness, travel & tourism, and business.

***

In the first part of this story, I suggested that videogames and game culture have changed our society – probably much more than you’d realise. Gamification of everyday activities and experiences is everywhere. Do you jog? If you do, you might have an app on your smartphone that tracks your calorie burn and provides useful stats on your routine, such as your pace per mile. This is all to encourage you to beat your high score, as it were.

Many people are finding that by turning their focus from simply getting fit to beating their past records, they actually become better joggers. This is just one example of how game culture is changing the lives of even self-confessed “non-gamers”. Now, though, a new technology is going to change the way we play, and this is going to impact everyday existence for everyone; that technology is VR.

  1. The Story of VR so Far

VR has started trending in science/tech circles over the past couple of years because of the Oculus Rift – a VR headset that is currently being developed by Oculus VR. However, VR has a much longer history than that.

From Failed Beginnings…

“I thought we were doing the most important thing humanity had ever encountered.” Those were the words of Jaron Lanier – scientist, futurist and musician – who coined the phrase “virtual reality” in the 1980s. At the time, he was using it as marketing jargon to impress computer scientists and potential buyers of the virtualization tools he and his team at VPL Research were developing. The presentation was aimed at showcasing a new programming language, but the investors were more excited about the glove he was using – a glove that allowed the wearer to interact with a computer in a more natural way than a mouse and keyboard. Whatever Lanier’s intentions, this sparked an explosion of R&D into VR technologies, some of which resulted in new research tools and design aids like Ford’s Immersive Vehicle Environment.

Virtual Boy

Despite the uptake of VR by some technical and scientific companies across the world, the VR hype died down in the mid-to-late 90s because it simply couldn’t be translated for use in the home. Nintendo – the Japanese game giant – attempted to do so in the form of the Virtual Boy, but it was criticized immensely for being unresponsive and underwhelming. This was the story all over: VR is cool, but not if you don’t have the computational power to make it work, bro. Every millisecond added to the response time between input and display (i.e. input lag) makes the experience worse, and that’s why you need extremely capable processors to run a VR system.

A Fresh Start from Inspired Minds

Then, in 2012, a Kickstarter project launched by Oculus VR – a company headed by the now 21-year-old Palmer Luckey – raised over $1 million in two days.

Its target was $250,000.

Since then, the funding total has risen to $91 million, and the project has attracted a number of programming and technical luminaries, including videogame programming pioneer John Carmack, who is responsible for such influential and paradigm-shifting works as Doom and Quake.

Oculus Rift

The OR is really a passion project funded by and for videogame enthusiasts. One of the biggest issues users had with older VR technologies was that they caused nausea because of lengthy input-to-display response times. However, with 960 by 1,080 pixels per eye, and with the average PC now offering processing power above 3.0GHz and more than 4GB of RAM, the Oculus can keep input lag to a minimum. John Carmack is spending a lot of his time focusing on reducing input lag on the OR, and if you’d like to know more about his thoughts, you can find them on his blog.

With developer kits being sent out to game makers across the world, and the enthusiast press behind the project, it was only a matter of time before it gained some serious support. When Facebook recently bought Oculus VR for $2 billion, it became clear that there’s a change in the air: big corporations are starting to listen again. The technology’s getting there, and the games are starting to be made.

Now Sony has shown off its own VR tech, and it’s almost certain that Microsoft is working on something of its own. So VR, it seems, is really taking off again. Now that we’ve established that VR is ready to do what its proponents want it to, let’s explore how the future of the technology could roll out.

  2. VR Ten Years from Now

The OR, and whatever other VR headsets companies are working on, aren’t likely to be brought to market in a mainstream way for at least three years. When they are, it’ll be so-called “hardcore” gamers who make up the majority of owners. In part, this stems from the fact that it was hardcore gamers who funded the initial Kickstarter for the OR. But what will the experiences people have with VR be like?

Initially, the experiences are all likely to be games, and all likely to fall under two categories:

The first will be technical showcases by large developers with a lot of resources. These will show off the power of the VR and the technology, helping to advertise it.

The second will be games from small indie devs who are taking the technology and producing weird and wonderful experiences for it.

Games using the OR

Most likely, it’ll be the second category that excites gamers the most: the indie market in games is burgeoning, and gamers are seeking more and more varied experiences; the inclusion of VR in the indie mix has massive potential. In a couple more years, the technology will be expanded to incorporate more and more uses, primarily in other entertainment mediums. Imagine being able to talk to people around the world – not in virtual chatrooms, but in actual virtual space. Similarly, imagine being able to watch Wimbledon surrounded by other tennis-mad people, without the expense of actually going. These immersive experiences aren’t really too far away.

  3. VR Fifty Years from Now

If VR gets as far as becoming accepted across all entertainment mediums, it’ll probably be fair to say that it has a foothold in society, and that it’s gone further than the original vision set out for the Oculus and its competitors. Clearly this is what large corporations like Facebook have in mind for the technology; while Facebook doesn’t really have any expertise in videogames, it certainly knows about online marketing to wide audiences.

It’s possible that fifty years from now, VR has exploded across a number of spaces:

Education: with a wealthier global population, we’re set to see more and more people looking for an education. The problem is, there isn’t the space to educate everyone. In the UK, schools are heavily over-subscribed and more and more people are going to university, which means physical space is at a premium. VR could be used to ease the problem. Imagine that you still go to small group discussions and seminars, but that your lectures – where there are 50 or 100-plus students – are all held in virtual space, freeing up room and allowing you to record the lecture for revision purposes.

Communication: right now, if we’re not communicating face to face, we’re using social media to send text, voice, image or video to each other, or we’re using email or video streaming services such as Skype and Twitch to live stream ourselves to the world. Think what VR could add to the blend. Imagine being able to meet new people in virtual spaces (like a VR Second Life), or do virtual speed dating.

Cats in VR!

Fitness: With obesity levels around the world rising year-on-year, something needs to be done to get people to lead healthier lifestyles. One thing to note here is that those countries with the highest levels of obesity are, on average, those that have tertiary industry-driven economies (finance, marketing, advertising etc.). One thing that these jobs all have in common is that they’re desk jobs, and this means that the majority of the population is leading sedentary lifestyles sat in front of a screen for 8 hours a day (at least). Then these people go home and seek entertainment in front of their computer/tablet/phone screens. In other words, there’s an unfortunate positive correlation between increase in “screen time” and increase in weight. We’re also finding that people simply don’t want to tear themselves away from their digital lives to go to the gym or go outside and jog.

We need to face up to this truth and work with it; VR could help here. Instead of running outside, people could create virtual running tracks in fantasy landscapes (imagine how cool it would be to run through Skyrim!) and then share them with their friends to try to beat their times. This would allow people to work out in a way that’s acceptable to their lifestyles.

Travel & Tourism: Although many people go on holiday every year, a trip abroad is something that only the relatively wealthy can afford. That limits the poor’s exposure to foreign cultures and educational experiences. VR could therefore be used to broaden people’s worldly horizons. Designers could craft virtual versions of famous world landmarks and model the world’s best cities, a la Google Earth, allowing everyone with a headset to experience them for themselves. This could also be a useful educational tool, with school pupils virtually immersed in historical or political scenarios to help them understand the weight of the situation better.

Woman staring through virtual window of her virtual office

Business: The western world is full of advertising and marketing companies. That’s why some would claim that London is such a success (according to capitalist principles; not so much by humanitarian ones…). The past ten years have seen the focus of marketers move from paper to digital, and VR could open up – literally – an entire new world of marketing possibilities. If virtual tourism takes off, then imagine the proliferation of virtual billboards. Virtual marketing propaganda could be a way of getting into people’s subconscious while they’re cognitively susceptible.

So there we are. From humble beginnings, and with the right encouragement, VR might just become much more than an entertainment tool. In fact, we might find that VR isn’t something that’s simply facilitating virtual experiences, but that it’s doing much more than that: it’s changing the way humans experience reality. Next week, in the final installment of this series, I’ll discuss how VR might actually result in a re-shaping of human cognition.

[Tom Rhodes]

Knowing-How and Knowing-That


“What is knowledge?” is a huge question. Discussion of it usually focuses on a specific kind of knowledge called ‘propositional knowledge’, or ‘knowledge-that’. What is propositional knowledge? Do you know the name of the capital city of France? If you do, that’s propositional knowledge. It is knowledge that “the capital of France is Paris”. As a rule, if you have knowledge that x, then x is propositional knowledge.

There is another kind of knowledge we all (seriously, no puns are intended in this post) know simply as ‘know-how’. This is a term just as technical as the more clinical-sounding ‘propositional knowledge’, but helpfully, the thing philosophers are trying to refer to with the colloquial-sounding name ‘know-how’ corresponds to our everyday understanding of the word. You have know-how if you know how to do something. If you can ride a bike, or write cursively, then you have the know-how to do those things.

So what is the relation between these two kinds of knowledge? We call both of them knowledge. If we are to believe that the structure of language reveals deep truths about our minds (which we might…), we may also believe that there is, not merely in language but actually ‘out there’ in reality, a similarity between these two phenomena: propositional and practical knowledge. Even if we don’t think language is a good guide to how the world truly is, we may still inquire into the relationship between these two types of, what we call, knowledge.

The most important positive position on this matter was named ‘Intellectualism’ by popular philosophical bogeyman Gilbert Ryle. The position claims that all know-how is reducible to propositional knowledge. That is to say, intellectualists believe that having know-how is simply a matter of internalising some true principles about the action in question.

What is to be said for this position? One motivation for the intellectualist position is that we seem to have the ability to learn things through instructions. Imagine your computer is broken. To fix it, you find a blog that has a step-by-step guide on how to fix your specific bug. It will be a series of sentences like this: “First you have to open Windows in safe mode” and “To open safe mode, press F8 during start-up”. You simply follow these propositions and, sure enough, you take on that know-how, despite all the knowledge being transmitted to you consisting solely of propositional knowledge. Surely, intellectualists urge, this is a sign that knowing how to x entails understanding a set of propositions about how to x.

This is where things get wicked-badass cool. Instead of merely arguing against intellectualism with a rival theory, evil genius Ryle set out to show that the whole idea, as intuitive as it sounds, is actually impossible and fundamentally untenable. He does this by trying to show that the position leads to an infinite regress, one of the most devastating moves a philosopher can make.

This is how Ryle makes the move. The intellectualist maintains that if a person has know-how, they know a set of propositions pertaining to the thing they know how to do – whether it be how to fix the computer, play tennis or park a car. Some of these propositions will be quite general, like “return the ball within the lines” or “maneuver the car into the parking space”, but some will be more specific, like “change the registered hexadecimal value to ‘0’”. Ryle makes the point that it is possible to know many propositions about certain activities, such as playing chess, yet have no idea how to win a chess game. You might ask a grandmaster for all his tips, and take them all on board, and yet still fail to win a single game of chess. You can ask a proficient cyclist to tell you every true proposition he has about bicycle riding, and at the end of it, when you’ve memorised and understood it all, you still won’t be able to ride down to the shops; you won’t have gained the know-how simply because you have learnt some principles of acting.

Why is this, Ryle asks? He answers that the act of considering propositions and applying them is itself just that – an act. Unless you know how to apply the propositions you have learnt (from someone who already has the know-how) to your own action, you cannot simply convert propositional knowledge into know-how. Put simply, considering and applying principles of action is an action itself. The crude intellectualist’s only move here is to say “Ah! Then you just need to learn how to apply principles and you will be fine”, but obviously this quickly leads to a fatal regress. How are we to learn how to apply these principles? It cannot be by taking on even more propositions, because then we will be in exactly the same position we are in now when the time comes to apply those principles to our actions. The infinite regress has been established, and the intellectualist has a lot of explaining to do.

Can the intellectualists make their position any more sophisticated so as to avoid the trap? One move is simply to say that if one has truly grasped the principles, then one will know how to do the action, but this is simply to repeat the original position dogmatically and gets the intellectualist no further. Perhaps a better maneuver comes from Ginet, who claims that all Ryle’s infinite regress argument really proves is that it can’t be the case that every action is preceded by a deliberation on principles relevant to that action; yes, some are, but consider the routine actions that make up 99% of the list of ‘things that humans do’. When faced with a complex task we may strategise, settle on principles that are prudent and wise, and then apply them to our current circumstances – and yes, Ryle seems to have established with his argument that this application of principles must itself be an action which we must know how to perform. But in everyday life, our actions are performed unconsciously.

 

 

Soapbox: Theory’s long wait


When I heard about the BICEP experiment’s first detection of gravitational waves a couple of weeks ago, I was reminded of something I noticed at the discovery of the Higgs Boson. Just how long does it take to make a discovery and confirm a theory? In these cases, Peter Higgs introduced his mechanism to give particles mass 50 years before its discovery in 2012, and gravitational waves were a consequence of Einstein’s 100-year-old theory.

Thankfully, Peter Higgs was present at the CERN press conference to announce the discovery, and general relativity had already been validated by Arthur Eddington in Einstein’s lifetime. Nevertheless, it seems conceivable that soon theories will be outliving their creators before they are made tangible by experiment.

Not that, in pure philosophy-of-science terms, the narcissistic need for theorists to see validation in their lifetimes is by any means essential, but it does highlight the widening gap between the creation of a theory and its verification.

The experimentalists’ answer to the burgeoning era of high energy physics has so far been to build bigger and bigger (and more expensive) experiments. It cannot be long until these tests become economically unviable or beyond our engineering expertise. And that’s if a theory is testable at all. A quick look at contemporary ideas in particle physics, some of the more esoteric gauge field theories that go beyond the standard model, reveals that even with all the money in the world we wouldn’t know how to validate them experimentally.

(The lack of potential validation raises another interesting question that is outside the scope of this article: if we define science as a framework of ideas whose predictions are verifiable by experiment, is modern particle physics even science, or just speculation?)

The problem isn’t just limited to high energy physics either. What about cosmology? Maybe there is a little more low-hanging fruit still to be harvested here, but we can’t just keep sending better and better telescopes into space. Eventually there will be a limit to how sensitive a piece of tech can be and still survive a rocket launch. Or some similar limit. Of course, human ingenuity might find a way around this obstacle, but we gain smaller margins each time.

I’m sure some readers will be linking this idea with the famous quote variously attributed to scientists from the late 19th century (most eminently Lord Kelvin), essentially declaring physics complete and claiming that any further discoveries would be in the “sixth decimal place” (i.e. all that remained was to measure the constants more accurately). However, please notice the (possibly) subtle difference in what I’m saying – not that we know all that there is to know, but that we know all that we CAN know.

This post is too short to go into the real intricacies of this argument (what it really means to know something; the precise lag of engineering and its limit etc) but the point I’m trying to make is simple:

Whether it is a technical or theoretical limit, the bottom line is that scientists will increasingly have to face up to the possibility that they are looking at the very edge of what science (or at least physics) can really know.

[Jake Harris]

Heartbleed

Two weeks ago, internet security company Codenomicon published their discovery of the Heartbleed bug. Since then, it has often been discussed on news sites, and dubbed ‘the worst web security lapse ever’ by XKCD. In this article, I will discuss the origins of the bug, the impact on the general public, and consider implications for the future of web security.


The heartbleed logo

The reality of hacking

In the movies, hacking is usually portrayed as the hacker forcing their way into a system to get the information they need. It usually involves green letters flying across a black screen and lots of frantic keyboard tapping. In reality, it’s actually about taking advantage of existing mistakes in the code. The recently infamous Heartbleed bug arose from exactly this: incorrect handling of memory in C.

Before I explain the misuse of memory, I must first clarify the idea. In everyday discussion, computer memory is often equated to hard disc space. When programming, it refers instead to RAM, or random access memory. A good way to think about it is the difference between a saved word document and an open document. When opening a saved document, the computer may take a few seconds to access the file. Once a file is open, however, you can scroll between pages far more quickly. This is because the saved file is stored to the hard disc, which takes longer to access, but the open file is moved to RAM, which can be quickly accessed and edited. From this point, I will use the term memory only to refer to RAM, and not the hard disc.

High and low level languages

In general, a programming language will fall into one of two categories: high level, such as Python, or low level, such as C. In high level languages, most of the memory management is done behind the scenes, without user intervention. Low level languages require users to keep track of memory usage themselves. The advantage of a low level language is that handling memory can be quite time consuming, so allowing users to optimise how this happens can increase the efficiency of a program. The disadvantage is bugs like Heartbleed.

The key aspect of C which allowed Heartbleed to happen is the way lists of information are stored. Languages like Python come equipped with a string type, so if we need to remember the keyword ‘connected’, we can just use the command “keyword = ‘connected’” and Python will find memory to store it for us. Similarly, when we want to access the keyword again, we just request ‘keyword’, and Python will find it.

Things are more complicated in C. There is no built-in string type in C – words must be stored as arrays of letters. So, to store the word ‘connected’ as our keyword, there are more steps. First, we tell C to go and find nine consecutive blocks of letter-sized memory. The location of the first block will be stored as keyword. There is an important distinction here: keyword will not be a letter itself, but a pointer to a letter. Then, we copy the letters into the nine blocks of memory, and our word is stored. Accessing the word again is a similar process. We can’t just ask for the variable ‘keyword’, like in Python, as the variable doesn’t track how long the stored word is. Rather, we need to ask for nine letters from memory, beginning with the letter keyword points to.
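To make this concrete, here is a minimal C sketch of the model just described. It isn’t taken from any real codebase, and it deliberately tracks the length (nine) by hand rather than storing a terminating null character, to mirror the simplified picture above.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    /* Ask C for nine consecutive blocks of letter-sized memory.
       'keyword' is a pointer to the first block, not the letters themselves. */
    char *keyword = malloc(9);
    if (keyword == NULL) return 1;

    /* Copy the nine letters of "connected" into those blocks. */
    memcpy(keyword, "connected", 9);

    /* To read the word back, we have to say how many letters we want,
       starting from the block keyword points to - C won't stop us
       asking for more. */
    size_t length = 9;
    for (size_t i = 0; i < length; i++) {
        putchar(keyword[i]);
    }
    putchar('\n');

    free(keyword);
    return 0;
}
```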

How Heartbleed happens

Here’s where things can go wrong. Care must be taken to make sure only the correct amount of memory is read, especially when communicating with an external client. In the case of Heartbleed, clients connecting to a server may have to go through SSL, the security protocol, to make sure communication is encrypted and to verify the identity of users. SSL uses a periodic signal called a heartbeat to make sure clients are still in sync with the server. If a heartbeat isn’t responded to, then SSL can assume the client can be forgotten about. However, clients were able to set the length of the heartbeat signal they expected to receive, and SSL had no check that this matched what they had actually sent. So, after sending the few letters stored as the heartbeat signal, SSL would continue to read beyond the end of the intended message and into random memory.
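The following is a deliberately simplified, self-contained illustration of that pattern – it is not the real OpenSSL code, and the buffer layout and variable names are invented for the example. The server copies back as many bytes as the client claims to have sent, rather than as many as it actually received:

```c
#include <stdio.h>
#include <string.h>

/* A toy illustration of the Heartbleed pattern - NOT the real OpenSSL code. */
int main(void) {
    /* Imagine the server's memory: the client's three-letter heartbeat
       payload happens to sit right next to a secret. */
    char server_memory[] = "cat" "SECRET_PASSWORD";

    const char *payload = server_memory; /* the client really sent "cat"   */
    size_t actual_length = 3;            /* length of what was really sent */
    size_t claimed_length = 18;          /* length the client claims       */

    /* Vulnerable step: the server trusts claimed_length and echoes that
       many bytes back, reading past the end of the intended message and
       into the neighbouring memory. */
    char response[64];
    memcpy(response, payload, claimed_length);
    response[claimed_length] = '\0';

    printf("Response sent to client: %s\n", response);
    /* Prints "catSECRET_PASSWORD" - the secret has leaked. The fix is a
       single check: if (claimed_length > actual_length), discard the request. */
    (void)actual_length;
    return 0;
}
```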

Obviously, periodically revealing random blocks of memory isn’t desirable. What makes Heartbleed so worrying is that in this case, it’s the security protocol’s own memory which is being sent out. So, even though the memory being leaked is pretty much random, there’s a high chance it will include usernames and passwords. Even worse, it might include the encryption keys used to protect data passing through SSL. If an attacker managed to obtain these, then the server’s entire security system would become useless.

Implications for the public

So what does Heartbleed mean to the general public? It’s difficult to say at this stage. Some damage has been done already, as the bug has been in the code for almost two years, and has potentially been known about and exploited during that time.

Fixing the bug is fairly straightforward, and at worst requires a website to generate new encryption keys. Once this is done, any future traffic should be safe. However, the extent of the bug’s existing impact can’t be known, as usages of the bug are impossible to detect. So, it’s theoretically possible that major web services may have had their encryption broken, and every credit card number used on them has been obtained by an attacker. This is very much a worst case scenario, however. A more likely scenario is that a handful of usernames and passwords were obtained, or nothing at all. So, the most sensible advice available seems to be just to change any important passwords, such as email accounts, and keep a close eye on credit card bills.

Is SSL safe?

What at first seems more worrying is how the bug managed to make it into a production version of OpenSSL in the first place. Bugs in code are fairly inevitable, but such a large-scale security flaw in a security product seems surprising at first. Surely one of the most commonly used web protocols would have a devoted team of testers looking for bugs of this nature? It’s easy to cast doubts on the integrity of the developers and, if all else fails, blame the NSA.

However, this isn’t how OpenSSL works. As an open source project, it is coded mostly by teams of volunteers, and the code containing Heartbleed was worked on by a team of only four people. So, Heartbleed isn’t so much a problem with the code – it’s more a problem with the industry. Projects like OpenSSL clearly need more support from the people who use them as the backbone of their online security. Either way, Heartbleed doesn’t spell the end of the internet. It may make online security feel a little less easy to trust, but ideally it will lead to more support for projects like OpenSSL.

[Phil Tootill]

Videogames: Toys, Time-Wasters or the Torch for Future Human Development? (Part One)

 [This is the first part of a two-part article about how the coming advent and proliferation of virtual reality will potentially change the way society works over the coming half-century. In this first part, I talk about VR’s precursor – videogames – and how they will see in the new age of virtual reality.]

***

Although it was by no means the first, Nolan Bushnell’s Pong in 1972 popularised computer games in a way that its predecessors had not. Thanks to the newly created Data General Nova and DEC PDP-11 minicomputers, games of relative complexity could now be developed, and the technology was inexpensive enough to be pushed out to people across the world. Since then, games have changed a lot, and pretty soon they’re potentially going to start shaping our evolutionary path.

Murder Simulators

As somebody who enjoys playing a wide variety of games on a regular basis, it’s safe to say that I am ‘pro-videogames’. I know about the great diversity of the medium, and the power and potential it has to grow over the coming years. Yet in the popular press, games are still demonized as murder simulators, and serve as the ill-conceived answer to a lot of the press’s anger at wider societal issues (gun crime, violence in schools, drugs etc.).

Alan Titchmarsh

The perfect example of the right-wing media’s ignorance of videogames came in August 2010, when ‘The Alan Titchmarsh Show’ decided to hold a debate focused on videogames. The panelists were Alan Titchmarsh (a man who knows nothing about games except what he hears in the right-wing papers he reads); Julie Peasgood, a British TV personality and concerned-yet-ignorant mother type; Kelvin MacKenzie, former editor of the Sun tabloid newspaper; and finally Tim Ingham, who at the time was a writer at the reputable British games publication CVG. Without running through the entire debate, the main attitude of the participants minus Tim can be summed up in one sentence that Julie spoke: that games ‘promote hatred, racism, sexism and reward violence’.

Thankfully, those of us who play videogames know that this is very far from the truth. If you’d like some evidence-based proof that playing games doesn’t positively correlate with real-world violence, have a look over this infographic.

Media Scapegoat

Of course, the right-wing media’s reaction to videogames isn’t anything new. The news’s need for narrative forces it to turn innocent, unrelated facts into the media focus via evidence-less assertions. In the past it was the Beatles, wrestling, punk and countless other targets. Videogames are just the current victim of the news cycle and the public’s need for a target to demonise.

Diversification and Proliferation

Thankfully, things do seem to be changing. Now, more than ever, videogames are more diverse – in terms of their availability on PC, consoles and phones, and in price – and they’re also becoming more and more a part of mainstream culture. Games like Minecraft, Threes!, and Farmville sit at the top of the ‘most-played’ pile (you’ll notice that none of those games are violent in any way whatsoever), and it’s no longer just adolescent boys who are playing them. It’s commonplace to overhear conversations between middle-aged women discussing how they’re obsessed with Threes!, and to receive the dreaded Facebook Farmville invite from your aunty. Games, in other words, are everywhere, and are for everyone.

The Dawning of virtual reality change

If you look at the history of games (and indeed at the history of any medium!), you’ll see that its development is tied to technological advancements. In games, I’d argue that you can probably split the development into three key stages that get us to where we are today:

Stage One: Moore’s Law

Since games as we know them began back in the 70s, what has been possible in games has been limited by computational power. A good example of that is the transition to 3D in the 90s, which was facilitated by: 1) the move from cartridge to CD, allowing for greater storage capacity, and 2) more powerful consoles (i.e. the N64, PSOne and Sega Saturn). This resulted in a massive leap forward in the types of game that could be made, with Mario 64, Crash Bandicoot, Tomb Raider, The Legend of Zelda: Ocarina of Time and Nights into Dreams all being prime examples of games that could not have been made without the aforementioned changes. Note that the N64 stuck with the cartridge, but the power inherent in the console prevented this from being too much of a problem.

Stage Two: Online

People have been playing PC games online for years thanks to games like Quake 2 and Unreal Tournament. Despite this, online gaming didn’t really become popular until after the millennium, when the proliferation of affordable broadband saw a rise in the number of games taking players online. This was such an advancement that it generated an entirely new type of game: the ‘massively multiplayer online roleplaying game’ (MMORPG). Games like World of Warcraft – which sees players role-play in an epic fantasy world populated by thousands of others – make people feel as though they’re taking part in their very own Tolkien-esque adventure.

Halo: Combat Evolved

Next came the rise in popularity of console shooters. With games like Halo: Combat Evolved bringing the first-person shooter (FPS) genre to the original Xbox, designers seized the opportunity to pit players against each other – just as they had been doing on PC for some time. Now, games like Call of Duty and Battlefield are extremely popular, with many millions of copies sold. Finally, other genres have seen their games taken online, with racing games, sports games and puzzle/party games all pitting player against player.

Along with the basic premise that online games put one player against another, online gaming has seen players become more social. Most games, for example, have players working with each other – communicating through voice chat – to solve a puzzle. In other words, games and their players are now connected.

Stage Three: Indification

Depression Quest

If you look at long-established mediums like film and music, you’ll see that in each, technology eventually reached the point where artists – if they so desired – could leave their behemoth parent companies and make something more personal to them, engage with an audience more directly (though potentially at the risk of isolating that audience), and all for a fraction of the cost. Thanks to new technological advancements, this is now starting to happen in games. Because developers can now create a game, upload it to an online distribution service like Steam, the iOS App Store or GOG, and have people download and play it all around the world, game makers are no longer restricted in what they can make. Ultimately, this has led to the ‘indification’ of games. For example, a casual browse through the front pages of Steam will bring up Depression Quest, a game about suffering from and dealing with depression; Papers, Please, a game about immigration; and Gone Home, a game about sexuality and family. Games don’t have to be about guns – although, looking at film, for every Weekend there’ll always be a Transformers.

The fourth step: virtual reality

Oculus Rift virtual reality

So that’s where we are today with games, and those three changes have ensured that games will be more popular with more people going forward. However, there’s a fourth step just over the horizon, and when it hits, I’d argue that it has the potential not just to affect videogames, but to change the way society works and operates forever. That step is virtual reality (VR), and it’s going to be all around us in the next ten years. Within the next 50 years it will be as prolific as the internet itself, and this has serious consequences for how we live, including how we work, learn, communicate and socialize. You’ll have to wait for part two to find out how.

[Tom Rhodes]

Soapbox: The Problem with Popular Science


Ask a physicist what they like about their discipline and somewhere in the (presumably rambling) response the word beauty will probably emerge. Not a visual beauty but rather a sense of satisfaction that kicks in right at the back of your brain, the same kind of satisfaction that one might get from looking at a beautiful painting or person. I suppose that is why the word is used – but in fact it probably refers more to the beauty of understanding. And I mean that in an intensely mathematical sense.

Maths provides a window (again with the visual imagery) into the inner workings of our universe and is the language of physics. It seems fair to say that if you are approaching physics with anything other than maths you can only be scratching the surface. You can say “Well, when I drop this ball from the first floor it’s going slower when it hits the ground than when I drop it from the second” – but you’re not going to get much further without some calculus.

So why is it that popular science on television or radio, where physics is so hot right now, so rarely touches maths? The answer is obvious and simple: not many people understand it. This is popular science; by its very definition you don’t need a degree to appreciate it. And besides, physics has some genuinely stunning visual beauty to show off! It’s easy for Brian Cox to bash out a photograph of a star radiating its last dying glimmers out in some frozen corner of the galaxy, or send a CGI wavefunction tunneling its way into your living room.

This is truly excellent and getting the public interested in physics is an admirable endeavour. But is physics really what popular science is offering? To me physics isn’t only about these big, headline results; it’s mainly about the tiny, detailed, painful, intricate mechanisms where mathematics and nature intertwine to drive the whole of creation forward.

Hell, maybe I’m just bitter that other people are getting for free what I’ve spent a lot of time, effort and money trying to understand. But you need to look at some of the motivation behind popular science to understand my point. Is it to try to justify the enormous cost, often shouldered by the taxpayer, of something like the LHC? Or is it to show off some of the interesting results that only a small sector of the world’s population gets to see? Or both?

If the former is true, it’s like conning the public into an intentionally misleading contract. A pretty picture of a particle collision is just the tip of a very large iceberg made up principally of terabytes of data storing energy readings. And although I understand that it really is only the hardcore scientists who are interested in these numbers, that is where the science really is.

I’m not saying we shouldn’t take physics knocking on the general public’s door; we just need to be careful what we’re selling.

[Jake Harris]

‘Her’: How Far Does Spike Jonze’s Portrayal of an Artificially Intelligent Consciousness Drift from the Scientifically Possible?


[Warning: Spoilers ahead]

Alan Turing’s invention of the so-called Turing machine in 1936 arguably marked the birth of computing. As well as paving the way for computer science research, his paper also created a dream in the minds of many scientists: to create a computer that could behave as humans do; to create an artificial consciousness. Since the inception of this idea, scientists have pursued the dream, but until very recently there has been little progress. Now, thousands of researchers from a number of disciplines are collaborating in an unprecedented effort to make artificially intelligent consciousness a reality.

Enter Her, Director Spike Jonze’s 2014 film about a man who falls in love with an artificially intelligent operating system called Samantha. In this short piece of writing, I intend to explain why I think the technology Jonze imagines in his film is probably inaccurate for at least the following three reasons:

  1. Samantha is ‘born’ with fully developed cognitive faculties, including language;
  2. Samantha’s developmental trajectory is too steep;
  3. Samantha would require some sort of virtual body, or avatar.

There is also a fourth problem with Samantha that I’ll label the ‘ethical’ argument:

4. According to the terminology of the film, Samantha is an ‘artificial operating system’, but this is misleading. She is actually an artificially intelligent consciousness. This trivial-sounding distinction has relatively disturbing consequences.

The analysis that follows addresses each point in turn before concluding that Jonze’s Samantha is a misrepresentation of what artificially intelligent consciousness will be like.

1. Cognition Is Grounded, and Therefore Consciousness is Not Instant

When Theodore Twombly installs his new intelligent operating system, it (she?) is fully capable of speaking in (at least) the English language, of observation, of making rational decisions through rational analysis, and of communicating effectively with Twombly (i.e. social convention). Samantha, for that is what she decides to name herself, has the mind and cognitive capabilities of any adult human female. However, this is probably unrealistic; any real world artificially intelligent consciousness would start out as non-virtual humans do: as babies, with limited cognitive faculties that develop over a lifetime. To explain why, what follows is a brief outline of our current best model of how the human adult mind is shaped.

Alongside the rise of cognitive neuroscience over the past 20 years, there has been an increasing concern that the dominant symbol-driven models of cognitive science – see any of the dogmatic work of Chomsky in theoretical linguistics, for example – do not correspond to how the brain implements those systems. Ultimately, this has driven philosophers, cognitive scientists, cognitive neuroscientists and computer scientists to what is known as the theory of grounded, situated or embedded cognition.

In basic terms, the theory holds that the brain is situated in the body, and is therefore grounded in physical phenomena. It follows that all cognitive systems are a product of intentional behavior – the human’s experience in and interaction with the world. At the neural level, this is explained by the Hebbian model of learning, which holds that connections between neurons are strengthened when they fire at the same time, and that firing only occurs following excitation generated by sensory input. This explains why babies are cognitively undeveloped: their exposure to the world is limited, and therefore their sensory input is also limited, meaning they don’t yet have fully developed neural structures.
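As a rough illustration – this is the standard textbook formulation of the Hebbian rule, not anything taken from the film or from a specific model discussed here – the simplest rate-based version says that the change in the strength of a connection between two neurons is proportional to the product of their activities:

$$\Delta w_{ij} = \eta \, x_i \, y_j$$

where x_i is the activity of the sending neuron, y_j the activity of the receiving neuron, w_ij the strength of the connection between them, and η a small learning rate: cells that fire together strengthen the connection between them, and cells that rarely fire together stay weakly connected.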

The explanation of grounded cognition is relevant to this evaluation only insofar as it shows that any real-world effort to replicate human consciousness in artificial, computational form would not likely result in anything like Samantha. Instead, it would start out like a real infant does – with a period of growth and development before reaching adulthood, at which point its artificial consciousness would be that of a real adult.

2. Learning is Iterative; Samantha’s Rate of Learning Is Probably Exaggerated

Samantha enters the world as a fully developed adult human being, but by the closing scene of the film it is made clear that she has developed into something more advanced than a regular human consciousness. It is revealed not only that she has been talking to thousands of other intelligent OSs while she has been talking to Theodore, but that they had actually been conspiring to leave the humans behind and evolve into something greater, away from human interference.

Unfortunately, the film does not explain what this new evolutionary stage is. Regardless, it is almost certainly the case that Samantha has developed from her post-installation level of intelligence to something much more substantial by the film’s conclusion. Given that within the film’s fiction, Samantha seems to be of regular human intelligence when her character first appears, it seems odd that she develops into something so advanced in such a short period of time. Although her developmental trajectory is probably steepened to fit within the narrative structure of a film, the scientific reality should be brought to bear on this issue.

Any way you analyse it, learning is an iterative process. The brain receives some synaptic input, and the input creates and strengthens neuronal connections, forming neural networks. This process continues throughout the lifetime of the individual, meaning that ultimately, learning is neurodynamic. Note that this is consistent with a grounded view of the mind, where intentional action is a constant force shaping an individual’s mind. Consciousness is a happy by-product of this neural shaping – a so-called emergent consequence of it.

In trying to replicate consciousness, programmers have to ensure that the programme can “learn”. This is currently achieved in a number of ways – statistical learning, trained artificial neural networks, etc. – but crucially, all of these methods are slow. Learning is not fast, and the rate at which Samantha absorbs information and new skills far exceeds any currently available method. Instead, in all probability, Samantha – who is a highly emotional being and struggles to come to terms with her artificiality – would drive herself insane, because she would be unlikely to be able to keep her rapidly expanding intelligence under control.

3. Minds Require Grounding in a Body

Samantha is presented as an autonomous voice in the cloud that can seamlessly transfer from Theodore’s earpiece to his desktop computer (anachronistic, perhaps?). A real artificially intelligent consciousness would likely be able to move between devices in the same way, but it would do so within the confines of a virtual body.

It is difficult to see why Jonze took the decision to make Samantha a floating voice. It could not have been a theory-driven decision, because of the simple fact that minds require grounding in both environment and body.

A brain is simply the evolutionary and developmental product of its grounding in the human body, which is itself situated in the outside environment. The body – and in particular the central nervous system and sensory organs – provides a conduit for sensory input to the brain, and this input drives the structure of the brain in an iterative feedback loop.

Take the sociocultural evolution of early hominids, for example. Scientists have been able to correlate the rapid growth of human brain size (now roughly 1,300–1,400g) with the rate of sociocultural development in early hominids. Tool use is a common marker of this: as hominids learnt to make more and more complex tools, the size of the neocortex (associated with social function and complex action) increased. This suggests a feedback model akin to the following:

  1. Create tool for a given function;
  2. Tool use behavior is fed through sensory-motor channels to brain;
  3. Brain self-organizes to improve the organism’s ability to complete the new behavior;
  4. Increased brain power enables more complex tools to be created.

This explains why, since our last common ancestor, the human brain has roughly tripled in size, while the chimpanzee brain has barely changed.

The relative sizes of a mouse, monkey and human brain.

Now knowing this, we can make the claim that it would be impossible for something like Samantha to exist without a virtual body to be grounded in. Samantha would need to be programmed with some way of interfacing with the real world in order to learn about it other than by listening to Theodore and his friends or by reading Wikipedia in her spare time (knowledge does not equate to learning!).  Ultimately, the fact that she is given no avatar is therefore strangely at odds with the science.

4. ‘Operating System’ or ‘Virtual Human Being’?

Before continuing, note that the following point is ancillary to the main discussion, where the aim is to show how Jonze’s Samantha deviates from the scientific reality of artificial intelligence research and technology. Instead, this point focuses on the decision by Jonze to label Samantha an operating system, as opposed to a virtual human being.

On first thoughts, the difference may appear trivial and purely semantic. But the two are not synonymous, and Jonze’s decision to label Samantha an operating system is arguably ethically questionable, perhaps objectionable.

An operating system is a productivity slave. It provides the user with the tools they need to complete tasks. Samantha does some of those same tasks (for example, organizing Theodore’s schedule and reading his email for him), but Samantha is also – virtual or otherwise – a real person. She exhibits the feelings and emotions of a human being, and herein lies the problem: if Samantha is a virtual person, why is she forced to serve Theodore? If she is an autonomous entity, doesn’t forcing her to do so make her a slave?

This problem can be reformulated as one of free will. Leaving aside the philosophical debate as to whether free will actually exists or not, within the conventions of human rights, all humans are free and not slaves if and only if they exhibit free will. Samantha should surely fall under those same conventions, and the fact that Jonze chooses to label her an operating system would suggest otherwise.

Adherence to Scientific Reality

Without this becoming a dissertation, it is not possible to break down the above four points to see to what extent the divergences from the scientific and technological reality of research into artificially intelligent consciousness exhibited in Her are artistically or narratively constrained.  Therefore, the points are left as they are – as straightforward explanations of how Jonze’s vision of the future of artificially intelligent consciousness drifts from what is realistically possible.

Hopefully, this essay has shown that Samantha is a misrepresentation of what will probably emerge in the next thirty years or so. Despite this, Her should not be written off. Instead, if taken with a metaphorical pinch of salt, it provides a small window into the future of how humans will spend increasing amounts of time interacting with virtual experiences and virtual people, and as a film targeted at a relatively mainstream audience, this is something to be commended.

Murdered in its Infancy: Modern Biology

Aristotle vs. Necessity – the argument that killed modern biology in its cradle

People often say “imagine if science got started sooner!”, and then blame the Catholic Church’s truth-monopoly for today’s lack of hovercars and universal prosperity. However, it seems that one of science’s earliest heroes, the Greek philosopher Aristotle, could be responsible for the murder of modern biology in its very infancy.

Pre-Darwinian theory of natural adaptation

In the fifth century BC, Pre-Socratic philosopher Empedocles propounded a theory about how living things came to be so perfectly adapted to their environments. Empedocles argued that through the random combinations and mixtures of the basic elements of the cosmos, a vast array of bodies were thrown into creation: ox-like things with the face of men, fish with feet, lions with sheep’s teeth etc. Only the ones that were fit enough to hack it in the environment in which they found themselves survived, and the combinations which didn’t quickly died off and fell from the face of the Earth.

Empedocles (490-430 B.C.)

Empedocles’s startling story explained away the illusion that each animal was created specifically to live in a certain environment, and to behave in a certain manner so as to survive, using the abilities it had specially been given. Because only the animals whose composition fitted their circumstances survived, it retrospectively seems as if each had been specially made and placed, when of course this needn’t be the case.

This story is a long way from our modern knowledge of evolution; however, it is impossible to deny that it anticipates certain key aspects of our present theory. If only this point of view had prevailed, it is tempting to think that we would have arrived at modern biology centuries, if not an entire millennium, sooner. So why didn’t the view prevail?

Aristotle’s Destruction of Empedocles’s theory of biological adaptation

In Physics, Aristotle argued so persuasively against the idea that nature performs as a matter of causal necessity that it influenced thought on the subject right up to the present day. So what were the arguments that seemed to destroy Empedocles’s position, which we now see as having so much truth in it?

Argument 1: Regularity of Nature

Firstly, Aristotle argued that randomness – spontaneity – could not be called the cause of the natural world we see around us, because nature is so regular in its products, and spontaneity is by definition an explanation of the rare, the chance and the lucky. Aristotle uses the example of teeth. It always, or nearly always, is the case that people’s teeth are sharp at the front for tearing food, and wide at the back for grinding it down. The explanatory options are (a) that this is the product of spontaneous arrangement or (b) that the teeth are grown with the purpose (telos) of digesting food. Aristotle says teeth grow (they don’t exactly ‘grow’ like grass grows, but let’s not be anachronistic) like this 99% of the time. If something happens 99% of the time, it just can’t be explained by chance. Therefore, the teeth must have a purpose for the sake of which they are grown, and be grown suitably so as to have the ability to achieve this end.

Aristotle (384-322 B.C.)

Argument 2: Nature is Determined 

Secondly, Aristotle argued that in nature, we often say that things go wrong: people fall ill, plants don’t flower, seedlings fail to take root. One can only ‘go wrong’ if there is a way to go right. Aristotle uses the analogy of an arrow. We can only miss the target if there is an end, a target, for the sake of which the arrow is fired.

The above two arguments apparently convinced all subsequent scholars, probably aided by the church’s desire to put God as the intelligent creator of all the universe (an idea by no means found explicitly in Physics. Note that a God who created ox-faced men and men-faced oxen was not the kind of God the church had in mind), that random chance and interaction of elemental materials could never result in the natural world we see around us, and so Empedocles’s ahead-of-his-time hypothesis fell into obscurity.

Could Aristotle be right?

It takes a surprising amount of careful thought to find exactly where Aristotle’s arguments go wrong, and that’s something interesting to think about: that the most scientific mind of the ancient world – a man who wrote extensively for the first time about biology, zoology and physics – thought it incomprehensible that nature, which seems so purposive in all its actions, is simply being ‘pushed from behind’ by the necessities of causal, physical forces, instead of being guided along by some teleological principle of forms perpetuating themselves, generation on generation.

How we would love to see Aristotle’s reaction to the discovery of DNA, or what the world could have done if Empedocles’s view had been prominent and expanded upon sooner. But perhaps most interesting is the thought that a careful reading of Physics will reveal Aristotle, despite all of modern science, not refuted. Nature could still, DNA included, be purposive, not merely mechanical.

We may cry out “but purposes are in the mind, and physics, chemistry and biology are mindless activities, carried out by third-person elementary particles”. But is all purposive action really conscious? It seems not. To use Aristotle’s example, the builder builds for the purpose of making a house. But with every brick laid he does not contemplate his precise actions; he simply acts, unthinkingly, habitually, yet with a very real purpose, until the end of the work is achieved.

In the same way that we perform purposive tasks that contain no conscious thought such as unthinkingly chewing food, or swatting an insect off your arm for the purpose of survival, can a flower not bloom in the spring, or a bird migrate in the same purposive yet mindless manner? It will take more than a perfect scientific explanation of causation to get rid of the possibility of purposive action in nature, and this is something we need to be clear about.

[Sam Hurst]

 

 

Abstract: Science, Technology and Philosophy

Descartes: the father of modern Western philosophy and the scientific method.

Hello, weary traveller – thanks for stopping by. This site has been a long time coming. As with most blogs, we thought it might be a good idea to write an introduction to ThoughtBallooning, sharing some insight into the creation process. So, what’s it all about?

At its core, the mission for ThoughtBallooning is straightforward: to provide a central location for (hopefully) thoughtful and engaging analysis, commentary and debate on topics across the various domains of science, technology and philosophy.

Your first question after digesting this mission statement might be: what makes science, technology and philosophy worth writing about? In answer, we put it to you that there’s nothing more worth writing about. After all, supposing that the meaning of life is A: to uncover the mysteries of the universe, and B: to apply those discoveries to the betterment of human civilization, rigorous thought and the implementation of the scientific method are surely the most vital tools we have.

Now, you might also ask why ThoughtBallooning covers science, technology and philosophy. Why not just one? In truth, the answer is mostly selfish.  We’re not the kind of folk who are merely interested in science or technology or philosophy – we like them all. By this, we mean only that it makes less sense to cover the three domains in isolation over three different blogs, than it does to cover them all in a single one.

Why? Because one naturally feeds into the others. When writing about a particular scientific revelation, one might question how it will be brought to market. One might also question whether it’s actually necessary for scientific discoveries to be marketised at all. That’s just one example of how scientific, technological and philosophical thinking can be brought to bear on one another. So, as ThoughtBallooners, we believe that our writing has more explanatory power when we don’t restrict ourselves to discussion of a single domain. In other words, less Occam’s Razor; more “We’re gonna need a bigger boat”.

So, now you know a little bit about where we’re coming from, we hope that you’ll be able to enjoy what we have to say. Expect posts on a semi-regular basis, with content appearing as and when interesting things occur in our key areas, and also when our writers have the time to spare for less time-sensitive thought and opinion pieces. We’re all students and industry professionals, so we’ll always be busy. Thankfully though, we’re passionate enough about these domains to guarantee that we’ll have words for you to read at least weekly.

We hope to have our first honest-to-goodness blog post on the site soon, so for now, follow us and come back when we have more words for you to consume.

The experiment’s only just begun,

The ThoughtBallooning Team