Thoughts on… Politics & Artificial Intelligence

Firstly, I’d like to draw your attention to an article written by my newspaper of choice (The Independent) entitled “Advances in artificial intelligence could lead to mass unemployment, warns experts.” This particular article was highlighted to me by my good friend Alex Blok.

It pains me that people are probably going to be pulled into believing that artificial intelligence will only lead to mass unemployment. That is simply not necessarily the case! Before I start my post properly, I’d just like to highlight that I’m not an economist, but I am quite passionate, and hopefully quite knowledgeable, about both artificial intelligence and politics.

Firstly, humanity has been innovating ever since we’ve been Homo sapiens. Innovation can be defined as finding new or better solutions to the problems we encounter. One of the biggest problems innovation has attempted to solve is health and safety at work. The wheel allowed one person to push a heavy object where four people would previously have had to lift it, and it led on to further innovations such as pulleys. The industrial era attempted to simplify people’s jobs by providing automation, and then gradually improved health and safety in those factories too. The assembly line simplified the process of putting things together (e.g. vehicles and electronic items), eliminating some of the dangers and much of the repetition of doing things by hand. Each of these innovations arguably caused some unemployment (but not mass unemployment), while at the same time arguably allowing different jobs to be created.

Automation allows for the simplification of processes, which directly leads to a “freeing up” of costs. This often means that positions in a business are no longer required, and the people in those positions are released, further “freeing up” costs. There are at least four choices for where this freed-up wealth goes: (1a) it goes on creating new jobs within the business, or (1b) new avenues of business; (2) it goes to philanthropic projects; (3) it goes into paying off debt early; or (4) it goes into the pockets of the management of the business, as they’ve been “clever” enough to employ such a solution.

I suspect that in contemporary society, with its increasingly capitalist stance, more goes into options (4) and (3) than the others (although there does seem to be some movement towards (1b) and (2), to a much lesser degree).

Now we come to Artificial Intelligence. We’ve been employing Artificial Intelligence techniques since about the mid-1900s, when simple AI techniques began to allow for automated route discovery, automated pattern finding, automated quality assurance, speech-to-text assistance for the visually impaired, and so on. There will continue to be advances in Artificial Intelligence which simplify human life. What’s different now to provoke such a worry about unemployment? Partly, AI is more widely known about, thanks to the general public becoming a bit more technology-savvy and providing greater funds to technology businesses. Another potential reason could be that the technological singularity is a possibility within the next 1 to 100 years (there are a variety of speculations), but I think this is a lesser reason for the unemployment worry, and is more a matter of existential risk if a globally unfriendly AI were to be created (but that is a completely different topic).

What needs to happen?

I think that the Future of Humanity Institute at the University of Oxford is correct that we need to start thinking about the risks which artificial intelligence poses, particularly as evolutionary algorithms are at such a stage that they could self-evolve at a greater pace than society can cope with. This risk research needs to feed directly into local, national and international governments, which are going to have to change rather rapidly. We must keep in mind that freed-up wealth, instead of being fed into the pockets of business owners (or even authoritarian governments), could (and should!) be shared out to make humanity better: allowing for new and different jobs, increased quality of education and research, better health for all of humanity, genuine ecological improvements that are sustainable, and room for human creativity to encounter new problems and create new innovations to solve them. We must do this with freedom, equality and community in mind.

So, in summary: AI, like any other innovation, is not really a problem but a solution. What could be a problem, however, is the management of those solutions, by corporate bosses, politicians and the media. We need to collectively find solutions, “collectively” meaning the whole of the community: employed, unemployed, management, politician or journalist. Hysteria and panic are not the way forward. Careful analysis and genuine support for humanity are.


Computational Intelligence Unconference UK 2014 – Announcement

Hi all,

I wanted to let you all know about an event that I am co-organising. It’s an unconference (as some of you know, I’ve organised unconferences before) on “Computational Intelligence”, in London (UK), on 26th July 2014. If you can be in the area on that date, then pop over to the CI Unconference UK 2014 website and get yourself a ticket. It’ll be a great day, full of wonderful talks and ideas, and lots of interesting people with different experiences.

More details below…


Computational Intelligence Unconference UK 2014
BT Centre, London, UK
26th July 2014

Computational Intelligence Unconference UK 2014 is looking for attendees and speakers at the event on 26th July 2014. The unconference will bring people together with similar interests for networking and talks about Computational Intelligence and Intelligent Systems. The unconference will be held at the BT Centre in the St Paul’s area of London, England.

Free tickets and more information are available from the website. Space is limited, so get your free tickets as soon as you can.

The event is an “unconference”, which is an informal, self-organising meeting with free (ticketed) entry, quite unlike a standard conference. An unconference is attendee-run: if you submit an idea you’ll get a slot in the timetable, on a first-come, first-served basis, to talk about whatever you like, providing it is relevant to the general topic of Computational Intelligence.

This particular unconference will be suited to those people who use, or have an interest in, Computational Intelligence. Talks will have an element of theory and/or application. Topics include:

  • Fuzzy Set Theory and Fuzzy Logic,
  • Artificial Neural Networks,
  • Evolutionary Computing and Genetic Algorithms,
  • Connectionist Systems,
  • Autonomous Mental Development,
  • Hybrid Intelligent Systems,
  • Artificial General Intelligence,
  • Applications (e.g., Finance, Government, Telecommunications, Security, Open Data, Social Networking)


Organisers:

  • Daniel Lewis, University of Bristol
  • Stephen G Matthews, University of Bristol

Thoughts on… intervals and time

I’ve been thinking about intervals quite a lot recently… I’ll start with a quick overview of intervals for everybody, and round off by talking about time (which will be a bit more advanced, including a bit of thought on fuzziness).

For those who don’t know what an interval is (mathematically), it’s quite simple really. Take a continuous set; the set of real numbers is a good example (they are “continuous” in that you can have any level of precision after the decimal point). An interval is everything between two endpoints, and each endpoint can be either inclusive or not inclusive.

A square brace indicates that the number is inclusive, and a round bracket means it is not. So, examples:

  • [0.0, 1.0] indicates all real numbers between 0 and 1, inclusive of 0 and 1.
  • (0.0, 1.0] indicates all real numbers between 0 and 1, not inclusive of 0, but inclusive of 1
  • [0.0, 1.0) indicates all real numbers between 0 and 1, inclusive of 0, but not inclusive of 1
  • (0.0, 1.0) indicates all real numbers between 0 and 1, not inclusive of 0 nor 1

We can check whether a value (let’s call it x) is within an interval simply by using the comparison operators (<, <=, >= or >), chosen based on whether each boundary is inclusive or not.

The above is the essentials of mathematical intervals, and hopefully presented in such a way that anybody could understand it (without any kind of mathematical training).
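As a concrete illustration, that membership check is only a few lines of C++. This is a minimal sketch of my own; the function name and signature are not from any particular library:

```cpp
#include <cassert>

// Is x inside the interval between lo and hi?
// loInclusive/hiInclusive select square-bracket vs round-bracket boundaries.
bool inInterval(double x, double lo, double hi,
                bool loInclusive, bool hiInclusive) {
    bool aboveLo = loInclusive ? (x >= lo) : (x > lo);
    bool belowHi = hiInclusive ? (x <= hi) : (x < hi);
    return aboveLo && belowHi;
}
```

So [0.0, 1.0] is `inInterval(x, 0.0, 1.0, true, true)`, while (0.0, 1.0] is `inInterval(x, 0.0, 1.0, false, true)`.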

One of the most natural ways that people use intervals is for anything regarding time. If we say “yesterday”, then we really mean the time interval [00:00 13th January 2014, 00:00 14th January 2014). Note the use of inclusive and non-inclusive boundaries. This particular interval has permanence (it could always be labelled “13th January 2014”), and it has a temporary label (during the interval represented by “14th January 2014”, it could be labelled “yesterday”). This particular theory of intervals regarding time is nothing new!

I am sure that some of my readers will be aware that James F. Allen formalised a system (in the 1980s) for using intervals with linguistic terms such as “before”, “after”, “during”, “meets”, “overlaps”, “starts”, “finishes” and “equals”. This system became known as Allen’s interval algebra.
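For flavour, a few of Allen’s relations can be encoded directly as orderings of interval endpoints. This is only a sketch of my own, over half-open numeric intervals (e.g. hours on a timeline); Allen’s full algebra has thirteen relations and a composition table, which this does not attempt:

```cpp
#include <cassert>

// An interval on a numeric timeline, with start < end.
struct Interval { double start; double end; };

// A handful of Allen's relations, expressed as endpoint comparisons.
bool before(const Interval& a, const Interval& b)   { return a.end < b.start; }
bool meets(const Interval& a, const Interval& b)    { return a.end == b.start; }
bool overlaps(const Interval& a, const Interval& b) {
    return a.start < b.start && b.start < a.end && a.end < b.end;
}
bool during(const Interval& a, const Interval& b) {
    return b.start < a.start && a.end < b.end;
}
```

With hours since midnight on the 13th, “yesterday” is {0, 24} and an interval for “mid-day” might be {11.5, 13.5}, so `during` holds between them.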

It seems to me that this formalised system of terms has no requirement for probability theory (taken in its pure form, it involves no statistics), but it potentially exhibits some form of vagueness…

For example, we could say “X happened before mid-day yesterday”; two questions arise from this one statement:

  • How do we define “mid-day”?
  • How long before “mid-day” did X happen?

Breaking the phrase up into things we are certain about, we would have:

  • “Before” is “A < B”, where A and B are intervals and the end boundary of A is before the start boundary of B.
  • “Yesterday” = Y = [00:00 13th January 2014, 00:00 14th January 2014)

We could define “mid-day” as M = [11:30, 13:30], but what if I actually meant [12:00, 13:00]? If I said [11:30, 13:30], then X might have been “during” rather than “before”!

We also don’t know how long before M, X happened. Was it very long before, or shortly before, and what would these terms mean? How would they be represented in time?

These two questions indicate vagueness… vagueness that we, as humans, can happily reason with. Using fuzzy set theory we can also build machinery to handle such vagueness, without any need for objective probability.
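One common way to model this with fuzzy sets is a trapezoidal membership function, where “mid-day” is fully true over a core interval and shades off towards zero at the edges. The shape and the particular numbers below are my own illustration, not a standard definition:

```cpp
#include <cassert>

// Trapezoidal membership: 0 below a, rising to 1 between a and b,
// staying at 1 between b and c, falling back to 0 between c and d.
double trapezoid(double x, double a, double b, double c, double d) {
    if (x <= a || x >= d) return 0.0;
    if (x < b)  return (x - a) / (b - a);
    if (x <= c) return 1.0;
    return (d - x) / (d - c);
}

// "Mid-day" as a fuzzy set over hours since midnight: definitely mid-day
// between 12:00 and 13:00, partially so out to 11:30 and 13:30.
double middayMembership(double hour) {
    return trapezoid(hour, 11.5, 12.0, 13.0, 13.5);
}
```

So 12:30 is mid-day to degree 1, 11:45 to degree 0.5, and 11:00 not at all; the hard choice between [11:30, 13:30] and [12:00, 13:00] disappears.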

I’ve stumbled across a few academic papers which talk about fuzzy temporal interval algebra (based upon Allen’s interval algebra) – for example the work by Badaloni and Giacomin, and Schockaert et al.

I’m foraging for anything else (theoretical or practical) which might be interesting, so I invite those who know this area to share in the comments. Hopefully we might be able to get some discussion going (either now or in the future).

This particular research has implications for the kind of research I’m doing for my PhD in fuzzy data mining, as well as my (currently) non-academic research in home robotics and Artificial General Intelligence.

FuzzBot – Part 2 – Photos

And, we have some photos of FuzzBot. Apologies for my awful photography skills.

Front-side view of FuzzBot version 1alpha
Top view of the FuzzBot
Close up of the Arduino sitting on the back of the Meccano body.

At the moment we’re sitting the Arduino inside a plastic pot, which sits on top of the Meccano base. Ideally we want to fix the Arduino onto the Meccano base properly, so that is something we will look at in the future. The ultrasonic sensor on the front is “hooked on” to a little platform that I made last night, and I covered that platform with some electrical tape to prevent any undesirable electrical faults.

For now though it all works as it should. Next stage is a bit more intelligent behaviour…

Thoughts on… rationality

I’ve recently become quite interested in the idea of the technological singularity, which is basically the point where artificial intelligence becomes more intelligent than human intelligence. What form this takes, and how we get there, is not known, but it is not inconceivable that we accidentally or purposefully build an artificial general intelligence which evolves itself beyond the level of its creators’ intelligence.

That aside, I have watched a few of the talks from the Singularity Summit of 2012, and stumbled across one talk by Julia Galef (of CFAR) on “Rationality and the Future”. Rationality is important on its own, but it has a special relationship with singularity theory. It seems to me (and those of you in this particular field, please do feel free to correct me) that rationality is important in singularity theory for the following reasons:

  1. Machines are programmed to be rational. Programming languages are based on mathematics – such as algebra, calculus, geometry and proof. It is this “proof” theory which allows us to test, and be confident, that an algorithm (or a whole piece of software) will act in a certain way.
  2. Rationality allows us to define beliefs, desires and intentions (BDI). As humans, this has, or at least should have, an implication on the decisions we make and the actions we perform thereafter. The same stands for an artificial intelligence – in machine learning algorithms the results may or may not match up with reality or even rationality, and those decisions will lead to action for an intelligent agent. PEAS (Performance measure, Environment, Actuators, Sensors) theory also comes to mind.
  3. Also, from what I’ve seen in singularity topics, there is plenty of opinion. Some opinion is based on reasonable speculation, and some on pure guesswork. (Although it sounds as if expert and non-expert opinion on when the singularity will occur are somewhat similar in their estimations; see the talk “How we’re predicting AI” by Stuart Armstrong.) This means that rational thinking is essential for humans to sort the strong theories from the weak. Making assumptions is necessary, as we don’t know everything, and the things we do know exhibit levels of uncertainty and vagueness; the important thing is to actually specify, for any particular statement, that you are making such an assumption.

The problem with the above is that almost every human is at least sometimes irrational. There are very few people who are able to live completely rationally. Uncertainty and vagueness permeate our understanding and our communication, not to mention that we do things wrong because of physical limitations (temporary or permanent). This is not necessarily always a bad thing. For example, when we fall in love (with a person, or a place), we might have our reasons for falling in love, but these reasons might not necessarily match up with science and mathematics; even if they do, scientific and mathematical reasoning is not necessarily at the front of the human’s mind.

The talk by Galef mentioned (and I am paraphrasing here) that one of her students came to her saying that he did not know whether to move away from family and friends in order to take a much higher-paid job. Galef rephrased the decision: if you were already in that job, would you take a very big pay cut in order to move back to your family and friends? The answer was apparently “no”. Galef said that this rephrasing got around the problem of the status quo bias, in that people prefer to stay in a situation rather than move from it, even if that is the irrational option.

It is a good example, and rephrasing a decision can allow for more reasonable decision making. It also depends on how much we home in on one or the other form of the decision. For example, in the decision about moving for a job, there could be an element of risk involved. The what-ifs could creep in: what if I don’t make friends, what if I lose the job, what if I am not comfortable in the place where I live? The level of risk might be too great for a rational move; in other words, the level of risk is greater than the level of pay increase. Likewise, risk can creep into the inverse: if I stay where I am, what if I lose my job, what if I lose my friends or upset my family, and what happens if my environment changes dramatically? The level of risk might be too great for a rational stay. We could also go into much more depth of reasoning, and actually assign value to staying or going. This is turning the irrational into the rational… but do we always need to go into such depths of reasoning? Particularly as we’re sometimes irrational anyway, can we not hone our decisions without becoming so rational?

At the moment I don’t know the answer to this final question, or even whether it is very important. What I do know is that this irrationality, or at least the uncertainty and vagueness, is the reason why I became involved in, and continue to be interested in, Fuzzy Set Theory and Fuzzy Logic. Fuzzy theory attempts to model these shades of grey, allows them to be reasoned with, and does not require definitive input or output. Probability theory is another area which helps with uncertainty, and I am quite convinced that there is a use for Fuzzy Probabilities and Possibility theory in Artificial Intelligence, particularly if we combine such reasoning systems with knowledge bases (and that is where my knowledge of Semantic Web / Linked Data and databases comes in handy).

These are just my initial thoughts on rationality for this blog, as I go along in my research into fuzzy theory and artificial intelligence I’m sure I’ll have more. Plus, I’m sure they’ll develop the more I consider singularity too.

Please feel free to comment.

FuzzBot part 1

Well, new blog, new style…

Some of you may know that I’m currently building a robot (with help from Beki). It has the following components so far…

  • An Arduino Mega 2560
  • An Arduino Motor Shield rev 3
  • An Arduino Ultrasonic Ping Sensor (HC-SR04)
  • Meccano for the shell
  • A Meccano motor base (with the remote control circuitry removed)
  • Batteries: 1x 9v (PP3), and 6x 1.5v (AA)
  • USB cable for connecting to the laptop (at the moment I’m using the Ubuntu GNOME 13.10 distribution of Linux; the hardware is an Intel Core i7)

At the time of writing, we’ve put together some very basic Arduino-C code which does the following (just for test purposes):

  1. Slowly inches forwards until it reaches 10cm away from an object, then…
  2. Reverses straight 10cm, then…
  3. Turns left, right, then left again while reversing for a few more seconds.
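As a rough sketch of that control logic, here it is in plain C++ with the motor and sensor calls replaced by a simulated distance reading. Every name here is a placeholder of mine, not the actual FuzzBot Arduino-C code, and the turning phase (step 3) is left out:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simulate the test routine: creep forward until the (simulated)
// ultrasonic reading reaches 10 cm, then reverse straight 10 cm.
std::vector<std::string> runTestRoutine(double startDistanceCm) {
    std::vector<std::string> log;
    double distance = startDistanceCm;
    while (distance > 10.0) {       // 1. slowly inch forwards
        distance -= 1.0;            //    each step closes 1 cm
        log.push_back("forward");
    }
    for (int i = 0; i < 10; ++i) {  // 2. reverse straight 10 cm
        log.push_back("reverse");
    }
    return log;
}
```

Starting 12 cm from an object, the simulated robot logs two forward steps and then ten reverse steps.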

Today, I’ve refactored out the code which controls the sensors and actuators, along with some of the basic calculations, into a library. This means that I’ve written some C++ code, which is the first bit of C++ I’ve done for quite a few years. This will mean that I can easily create a new Arduino-C sketch and import my C++ library.

I’ve also added some very basic fuzzy commands, implementing the fuzzy rule “if too slow, then speed up a bit”. This seems to work fine, but could probably do with a bit more tuning.
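A rule like “if too slow, then speed up a bit” can be sketched along these lines: fuzzify how “too slow” the current speed is, then scale the increment by that firing strength. The membership shape and names below are my own guesses at the idea, not the code running on FuzzBot:

```cpp
#include <cassert>

// Degree to which a speed is "too slow" relative to a target:
// 1 at standstill, falling linearly to 0 at (or above) the target.
double tooSlow(double speed, double target) {
    if (speed >= target) return 0.0;
    return (target - speed) / target;
}

// "Speed up a bit": scale a maximum increment by the rule's firing strength.
double speedUpABit(double speed, double target, double maxIncrement) {
    return speed + tooSlow(speed, target) * maxIncrement;
}
```

At half the target speed the rule fires at 0.5, so the speed is nudged up by half the maximum increment; at or above the target it fires at 0 and the speed is left alone.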

I plan to research some data mining methods for collision detection in robotics; as data mining is an area of interest in my PhD, I thought it would be appropriate to try to relate it somehow. The final plan is for FuzzBot to have some form of (fuzzy) Artificial Intelligence, which I hope will look reasonably organic to a viewer of the machine.

Keep an eye on this blog for more details. I hope to post some pictures when we’ve put the Meccano together in a better way…