Thoughts on… formality

The following is entirely based on observation…

Formality is an interesting one, isn’t it? We, as humans, seem to resort to formal speech when we meet somebody new, or when we talk about business. But why?

It strikes me that rapport can be built between two people when they match their tone of language, style of dress, and even breathing rates. This suggests that those who share particular styles or ways of life seem to connect quite naturally and are able to work together successfully. The effect is magnified when the people involved also share particular areas of knowledge and interest.

Formality, in this context, is about taking a particular form. It seems to me that “formality” is often associated with business environments. These environments require a particular type of dress, “formal wear”, meaning a two- or three-piece suit or some other kind of office wear. They also seem to bring out a type of language which is quite foreign to people’s day-to-day speech, and which, when fully employed, sometimes comes out in a robotic fashion.

As humans are naturally beings which are both intuitive and logical, both emotional and rational, it seems very odd to me that we would try to remove all that is intuitive and emotional from speech, whether that speech is “day-to-day” or “formal”.

Surely it would be better if we transacted conversation however rapport guides us, providing (and advocating) a careful balance of tolerance and personal belief. Why enforce, or try to enforce, conformity on those to whom it does not come naturally?

I would imagine that both the enforcement of conformity (the authoritarian method) and the truly open rapport method (the anti-authoritarian method) could lead to what we call tribalism. Tribalism has its pros and cons. One of the largest cons is hatred between tribes; another is that it is extremely difficult to build bridges between tribes. So even if we were to implement the anti-authoritarian method of non-formality, we would not necessarily escape these problems.

Such a topic also makes me think about the use of language. Is it truly ethical to enforce a particular style of language? Granted, we have to teach the next generations how to use the language we use, so that we can pass down concepts and history, but why are neologisms so frowned upon? Why are subtle modifications in language structure so frowned upon? Just by looking into the history of the English language you’ll find that it has changed quite a bit in the last 100 years, and is basically unrecognisable if you go back 1000 years. We even have regional differences. For example, I bet it would annoy quite a few people to hear what is spoken in Bristol as “Warez ee to?”, meaning “where is he?”: not only is the word “where” seemingly merged with the word “is”, but the “h” is dropped from “he”, and the word “to” is appended - which to some is more problematic than the word “at” that would sometimes be added in various other regions of the world. My point is to perhaps let it be, and let language evolve. In some cases regional dialects are not “new”, but have histories longer than the authorised Bibles.

One problem does come to mind, though, which is learning a language - in which case it is useful to have a common basis.

Anyway, I have digressed quite a bit from my original topic. So I’ll end the post here.

Thoughts on… thinking and decisions

At the University we have weekly meetings for the Intelligent Systems Lab (ISL), which we cleverly call “LabMeets”. The topic today diverged from the usual talk on some aspect of computer intelligence: it was a brief talk followed by a short recording of an interview, essentially on the behavioural economics of Daniel Kahneman. I was impressed enough to write up some of my notes, and to exercise the liberal art of rhetoric. Although economics might at first seem quite distant from artificial intelligence, it is actually quite closely related.

As mentioned, the first section of today’s LabMeet was a brief overview of a paper by Daniel Kahneman and Amos Tversky entitled “Choices, Values and Frames”, published in 1984 in American Psychologist. This paper showed that people tend to be “risk-averse” when the outcomes are framed as positive, and “risk-seeking” when the outcomes are framed as negative. A number of examples were given which highlight that, given exactly the same scenario, the wording of two otherwise identical options has a direct effect on which option is chosen. This is a key aspect of decision-making, and having an objective view therefore allows for a more rational decision. It matters quite a lot to humanity, because it means that humans can very easily be manipulated, just through the use of language.
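The point can be made concrete with the classic “lives saved” framing example - recalled here from Kahneman and Tversky’s work rather than from the talk itself, so the numbers are my own illustration. The options in the two framings have identical expected outcomes, yet people typically pick the certain option in the “gain” frame and the gamble in the “loss” frame:

```cpp
#include <cassert>

// 600 people at risk of a disease (the classic framing example; numbers
// are from my memory of the literature, not from the LabMeet talk).
// Gain frame:  A = "200 people will be saved", B = "1/3 chance all 600 are saved".
// Loss frame:  C = "400 people will die",      D = "2/3 chance all 600 die".
double expected_saved_A() { return 200.0; }
double expected_saved_B() { return 600.0 / 3.0; }               // 1/3 of 600 saved
double expected_saved_C() { return 600.0 - 400.0; }             // survivors
double expected_saved_D() { return 600.0 - 600.0 * 2.0 / 3.0; } // survivors
```

The arithmetic confirms that all four options are equivalent; only the wording differs, and yet the wording drives the choice.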

Kahneman also wrote Thinking, Fast and Slow, which, as far as I am aware, goes into more detail about this particular theory. In the LabMeet we watched a YouTube video of an interview with Kahneman about the book, in which he describes two systems that the brain uses:

  • System 1: Is, in essence, the part of the mental processes which includes intuition and subconscious thought. It is that “gut feeling,” and is our fast response unit.
  • System 2: Is, in essence, the more logical and rational mental processes. It (usually) takes more time to get a result from System 2 than it does from System 1, just because it takes time to calculate.

Consider one particular example. If somebody is on a short-term winning streak (e.g. in some kind of sport, or perhaps in playing the stock markets), our intuition (i.e. System 1) might tell us that that person is worth promoting or investing in. However, our rational mind (i.e. System 2) will tell us that, statistically, somebody who has been better than average for a lot longer (even if he/she is not currently on a winning streak) is the better person to promote or invest in. With this in mind, System 2 is usually the better one to go with.

However, we spend most of our lives, as humans, living in System 1, and it works for us most of the time. It is just that when the difficult decisions come, the result from System 2 will usually be the best decision. This is related to the phenomenon of regression towards the mean.

Leadership was also discussed: Kahneman mentions in the video that, in terms of presidents of the USA, George Bush was more of a System 1 thinker, whereas Barack Obama is more of a System 2 thinker.

From my own point of view, I couldn’t help thinking that both System 1 and System 2 obviously have their pros and cons. I wonder, from a brain-improvement perspective, how this particular theory could help. Would it be possible to make our intuitions correct for more complex decisions (i.e. improving System 1 while retaining its speed)? And would it be possible to improve our rational brain by making our day-to-day lives more rational, while making our rational thought incorporate the intuitive nature of our very being? It also leads me back to one of my previous blog posts, where I consider how machines might deal with the rationality and irrationality of humans.

Just some thoughts, and I’d be happy to hear yours…

Thoughts on… the art of memory, and the art of brain work

What follows is largely a rambling of my current thoughts on the art of memory and brain work. I don’t claim to be very knowledgeable on the subject, and certainly would not claim to be even remotely an expert on psychology or neuroscience - if there are readers of this post who are in those fields, then I’d be more than happy to hear from you about this subject.

As every day passes I find myself treating my body and mind not as me, but as a vessel for ‘me’. What do I mean by this? Well, if I see my body and my mind as a carrier for myself, then I could potentially treat such entities as mechanical devices. Mechanical devices have components, which can either be enhanced or replaced. When we think of machinery these days, we usually think of materialistic technology (e.g. mobile phones, computers, televisions, cars and planes, and even things like prosthetics and robotics). This need not be so: machinery could also be biological, or even psychological.

So if we take our bodies, then exercise and diet are obviously two key parameters for improving and enhancing them (or, conversely, degrading them!). However, we don’t often think of ways to improve our own minds. Granted, when you go to school, college and university you do improve your knowledge, your “key skills” and your learning/researching abilities. You can also keep your mind active through crosswords and other puzzles (e.g. Sudoku). There have also been “Brain Training” games appearing over the last decade or so, which improve mathematical, logical and visual capability… but is this enough?

In order to analyse whether this is “enough”, we need to consider the types of brain work. Here are some of the areas that immediately come to my mind (and no doubt there are others)…

  1. Memory
  2. Mathematics
  3. Logics
  4. Creativity
  5. Language
  6. Sensory stimulation
  7. Hand-Eye Co-ordination (or to put it in mechanical terms: Actuator-Sensor Co-ordination)

It is “memory” that has particularly been on my mind, partly because I’ve had to memorise various things recently. We usually get the distinction between:

  1. Long Term Memory
  2. Short Term Memory
  3. Also, sometimes, Muscle Memory

I’d like to think of slightly different categories for memory (there are probably more, and I could probably clarify them better than I have here, but…):

  1. Sequential Memory - where things (e.g. words, or symbols) are memorised in order, and they must be kept in order so as to maintain semantic and pragmatic integrity. Usually used for memorising scripts.
  2. Rule Memory - where A is associated with B, through some kind of rule or relation. Usually used for memorising concepts, or mathematics. Also an important concept for (Pavlovian) teaching/learning.
  3. Loci Memory (or Method of Loci), where concepts are stored sequentially or rule-based, against more memorable locations. Sometimes known as the “Memory Palace”
  4. Muscle Memory. Repetition can be associated with concepts, or more usually sequences. Consider the act of memorising a script: one word follows another, and if it does so regularly then the physical act of moving one’s mouth can actually begin to materialise itself as a muscle memory, without it entering into the sequential memory of the mind.

So I think that it’s quite important to exercise all these areas, through both “order” (i.e. repetition of the usual) and “chaos” (i.e. unexpected memorisation which goes against what you’ve memorised). When exercising the memory, I find that it is usually both the memory itself AND the process of memorisation that are important.

With memory exercised, I would say that other areas can then begin to become enhanced. Mathematics (arithmetic in particular) and logic (and sets) are key. Then it’s also important to get enough visual stimulation. Once visual stimulation occurs, the connection between the visual and our ability to manipulate the world can begin to be improved (i.e. hand-eye co-ordination / actuator-sensor co-ordination). With actuator-sensor co-ordination exercised, we begin to see that communication is important, and so we can use our actuators to stimulate other people’s sensors, through the use of language and creativity. If we’re receiving language and creativity, then we make new memories, and the whole process of brain enhancement begins again.

Although far from a formalised and scientific method, I think that the above informal formula is beneficial, and over the last few months I’ve tried to implement it (with a bit of success). We just have to treat our bodies and minds like the beautiful vessels that they are, and we will begin to see the benefits in the long term, both individually and in society.

Thoughts on… politics & artificial intelligence

Firstly, I’d like to draw your attention to an article written by my newspaper of choice (The Independent) entitled “Advances in artificial intelligence could lead to mass unemployment, warns experts.” This particular article was highlighted to me by my good friend Alex Blok.

It pains me that people will probably be drawn into believing that artificial intelligence can only lead to mass unemployment. That is simply not necessarily the case! Before I start my post properly, I’d just like to highlight that I’m not an economist, but I am quite passionate, and hopefully quite knowledgeable, about both artificial intelligence and politics.

Firstly, humanity has been innovating ever since we’ve been Homo sapiens. Innovation can be defined as finding new or better solutions to the problems we encounter. One of the biggest problems innovation has attempted to solve is the health and safety of work. The wheel allowed one person to push a heavy object where four people would previously have had to lift it, and led on to innovations such as pulleys. The industrial era attempted to simplify people’s jobs by providing automation, and then gradually improved health and safety in the factories. The assembly line simplified the process of putting things together (e.g. vehicles and electronic items), eliminating some of the dangers and many repetitions of doing things by hand. Each of these innovations arguably caused some unemployment (but not mass unemployment). At the same time, each arguably allowed different jobs to be created.

Automation allows for the simplification of processes, which directly leads to a “freeing up” of costs. This single fact often means that positions in a business are no longer required, and the people in those positions are released - aiding in the “freeing up” of costs. There are at least four choices for where this freed-up wealth goes: (1a) it goes on creating new jobs within the business, or (1b) new avenues of business; (2) it goes to philanthropic projects; (3) it goes into paying off debt early; or (4) it goes into the pockets of the management of the business, as they’ve been “clever” enough to employ such a solution.

I suspect that in contemporary society, with its increasingly capitalist stance, it goes more into options (4) and (3) than the others (although there does seem to be some movement towards (1b) and (2), to a much lesser degree).

Now we come to Artificial Intelligence. We’ve been employing AI techniques since about the mid-1900s, where simple techniques allow for automated route discovery, automated pattern finding, automated quality assurance, speech-to-text assistance for the visually impaired, and so on. There will continue to be advances in Artificial Intelligence which simplify human life. What’s different now, to allow for such an unemployment worry? Partly, AI is more widely known about, thanks to the general public becoming a bit more technology-savvy and providing greater funds to technology businesses. Another potential reason could be that the technological singularity is a possibility within the next 1 to 100 years (there are a variety of speculations), but I think this is a lesser reason for the unemployment worry, and is more a matter of existential risk if a globally unfriendly AI were to be created (but that is a completely different topic).

What needs to happen?

I think that the Future of Humanity Institute at the University of Oxford is correct that we need to start thinking about the risks which artificial intelligence poses, particularly as evolutionary algorithms are at such a stage that they could self-evolve at a greater pace than society can cope with. This risk research needs to feed directly into local, national and international governments, which are going to have to change rather rapidly. We must keep in mind that freed-up wealth, instead of being fed into the pockets of business owners (or even authoritarian governments), could (and should!) be shared out to make humanity better - allowing for new and different jobs, increased quality of education and research, better health for all of humanity, and genuine, sustainable ecological improvements, and allowing the creativity within humanity to encounter new problems and create new innovations to solve them. We must do this with freedom, equality and community in mind.

So, in summary: AI, like any other innovation, is not really a problem but a solution. What could be a problem, however, is the management of those solutions, including corporate bosses, politicians and the media. We need to collectively find solutions - “collectively” being the whole of the community, whether employed, unemployed, management, politician or journalist. Hysteria and panic are not the way forward. Careful analysis and genuine support for humanity is.


Computational Intelligence Unconference UK 2014 - Announcement

Hi all,

I wanted to let you all know about an event that I am co-organising. It’s an unconference (as some of you know, I’ve organised unconferences before) on “Computational Intelligence”, in London (UK), on 26th July 2014. If you can be in the area on that date, then pop over to the CI Unconference UK 2014 website and get yourself a ticket. It’ll be a great day, full of wonderful talks and ideas, and lots of interesting people with different experiences.

More details below…

Daniel

Computational Intelligence Unconference UK 2014
BT Centre, London, UK
26th July 2014

Computational Intelligence Unconference UK 2014 is looking for attendees and speakers at the event on 26th July 2014. The unconference will bring people together with similar interests for networking and talks about Computational Intelligence and Intelligent Systems. The unconference will be held at the BT Centre in the St Paul’s area of London, England.

Free tickets and more information are available from the website. Space is limited, so get your free tickets as soon as you can from our website:

https://ciunconference.org/uk/2014/

The event is an “unconference”, an informal self-organising meeting with free (ticketed) entry, quite unlike a standard conference. An unconference is attendee-run: if you submit an idea you’ll get a slot in a first-come, first-served timetable to talk about what you like, providing it is relevant to the general topic of Computational Intelligence.

This particular unconference will be suited to those people who use, or have an interest in, Computational Intelligence. Talks will have an element of theory and/or application. Topics include:

  • Fuzzy Set Theory and Fuzzy Logic,
  • Artificial Neural Networks,
  • Evolutionary Computing and Genetic Algorithms,
  • Connectionist Systems,
  • Autonomous Mental Development,
  • Hybrid Intelligent Systems,
  • Artificial General Intelligence,
  • Applications (e.g., Finance, Government, Telecommunications, Security, Open Data, Social Networking)

Organisers:

  • Daniel Lewis, University of Bristol
  • Stephen G Matthews, University of Bristol

Thoughts on… intervals and time

I’ve been thinking about intervals quite a lot recently… I’ll start with a quick overview of intervals for everybody, and round off by talking about time (which will be a bit more advanced, including a bit of thought on fuzzy).

For those who don’t know what an interval is (mathematically), it’s quite simple really. Take a continuous set - the set of real numbers is a good example (they are “continuous” in that you can have an infinite level of precision after the decimal point). An interval is everything between two endpoints, and each endpoint can be either inclusive or not inclusive.

A square brace indicates that the number is inclusive, and a round bracket means it is not. So, examples:

  • [0.0, 1.0] indicates all real numbers between 0 and 1, inclusive of 0 and 1.
  • (0.0, 1.0] indicates all real numbers between 0 and 1, not inclusive of 0, but inclusive of 1
  • [0.0, 1.0) indicates all real numbers between 0 and 1, inclusive of 0, but not inclusive of 1
  • (0.0, 1.0) indicates all real numbers between 0 and 1, not inclusive of 0 nor 1

We can check whether a value (let’s call it x) is within an interval simply by using the comparison operators (<, <=, >= or >), chosen based on whether each endpoint is inclusive or not.

The above covers the essentials of mathematical intervals, hopefully presented in such a way that anybody could understand them (without any kind of mathematical training).

One of the most natural ways that people use intervals is for anything regarding time. If we say “yesterday”, then we really mean the time interval [00:00 13th January 2014, 00:00 14th January 2014). Note the use of inclusive and not-inclusive boundaries. This particular interval has permanence (in that it could always be labelled “13th January 2014”), and it has a temporary label (in that, during the interval represented by “14th January 2014”, it could be labelled “yesterday”). This particular theory of intervals regarding time is nothing new!

I am sure that some of my readers will be aware that James F. Allen formalised a system (in the 1980s) for using intervals for linguistic terms such as “before”, “after”, “during”, “meets”, “overlaps”, “starts”, “finishes” and “equates to”. This particular system became known as Allen’s interval algebra.
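A few of Allen’s relations can be sketched in code over simple numeric spans (a simplification of my own for illustration - Allen’s full algebra has thirteen relations, and this uses closed spans on a number line rather than his abstract intervals):

```cpp
#include <cassert>

// A simple time span on a number line; assume start <= end.
struct Span { double start, end; };

// "A before B": A finishes strictly before B starts.
bool before(Span a, Span b)   { return a.end < b.start; }

// "A meets B": A finishes exactly where B starts.
bool meets(Span a, Span b)    { return a.end == b.start; }

// "A overlaps B": A starts first, they share some time, and B finishes last.
bool overlaps(Span a, Span b) { return a.start < b.start && b.start < a.end && a.end < b.end; }

// "A during B": A lies strictly inside B.
bool during(Span a, Span b)   { return b.start < a.start && a.end < b.end; }
```

Each relation is just a conjunction of endpoint comparisons, which is what makes the algebra so mechanisable.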

It seems to me that this formalised system of terms has no requirement for probability theory (taken in its pure form, it has no statistics), but it potentially exhibits some form of vagueness…

For example we could say “X happened before mid-day yesterday”, two questions arise from this one statement:

  • How do we define “mid-day”?
  • How long before “mid-day” did X happen?

Breaking the phrase up into things we are certain about, we would have:

  • “Before” is “A < B”, where A and B are intervals and the end boundary of A is before the start boundary of B.
  • “Yesterday” = Y = [00:00 13th January 2014, 00:00 14th January 2014)

We could define “mid-day” as M = [11:30, 13:30], but what if I actually meant [12:00, 13:00]? If I said [11:30, 13:30], then X might have been “during” rather than “before”!

We also don’t know how long before M, X happened. Was it very long before, or shortly before, and what would these terms mean? How would they be represented in time?

These two particular questions indicate vagueness… vagueness that we, as humans, can happily reason with. Using fuzzy set theory we can also use machinery to handle such vagueness, without any kind of need for objective probability.
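For example (an illustrative sketch of mine, with arbitrary parameters of my own choosing), “mid-day” could be modelled as a fuzzy set with a triangular membership function, so that a time is “mid-day” to a degree between 0 and 1 rather than simply in or out of a crisp interval:

```cpp
#include <cassert>

// Triangular membership function: 0 outside [a, c], rising to 1 at the peak b.
double triangular(double x, double a, double b, double c) {
    if (x <= a || x >= c) return 0.0;
    if (x <= b) return (x - a) / (b - a);
    return (c - x) / (c - b);
}

// "Mid-day" as hours on a 24-hour clock: support [11:00, 13:00], peak at 12:00.
// (These boundaries are my own arbitrary choice, for illustration.)
double midday(double hour) { return triangular(hour, 11.0, 12.0, 13.0); }
```

Under this model, 12:00 is fully “mid-day”, 12:30 is “mid-day” to degree 0.5, and 10:00 is not “mid-day” at all - which is much closer to how we actually use the word.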

I’ve stumbled across a few academic papers which talk about fuzzy temporal interval algebra (based upon Allen’s interval algebra) - for example the work by Badaloni and Giacomin, and by Schockaert et al.

I’m foraging for anything else (theoretical or practical), which might be interesting. So I invite those people who know about this area to share in the comments. Hopefully we might be able to get some discussion going (either now or in the future).

This particular research has implications in the kind of research I’m doing for my PhD in fuzzy data mining, as well as my (currently) non-academic research in home robotics and Artificial General Intelligence.

FuzzBot - Part 2 - Photos

And, we have some photos of FuzzBot. Apologies for my awful photography skills.

Front-side view of FuzzBot version 1alpha
Top view of the FuzzBot
Close-up of the Arduino sitting on the back of the Meccano body.

At the moment we’re sitting the Arduino inside a plastic pot, which sits on top of the Meccano base. Ideally we want to fix the Arduino onto the Meccano base properly, so that is something we will look at in the future. The ultrasonic sensor on the front is “hooked on” to a little platform that I made last night, and I covered that platform with some electrical tape to prevent any undesirable electrical faults.

For now though it all works as it should. Next stage is a bit more intelligent behaviour…

Thoughts on… rationality

I’ve recently become quite interested in the idea of the technological singularity, which is basically the point where artificial intelligence becomes more intelligent than human intelligence. What form this takes, and how we get there, is not known, but it is not inconceivable that we accidentally or purposefully build an artificial general intelligence which evolves itself beyond the level of its creators’ intelligence.

That aside, I have watched a few of the talks from the Singularity Summit of 2012, and stumbled across one by Julia Galef (of CFAR) on “Rationality and the Future”. Rationality is important on its own, but it has a special relationship with singularity theory. It seems to me (and those of you in this particular field, please do feel free to correct me) that rationality is important in singularity theory for the following reasons:

  1. Machines are programmed to be rational. Programming languages are based on mathematics - such as algebra, calculus, geometry and proof. It is this “proof” theory which allows us to test, and be confident that an algorithm (or whole software) will act in a certain way.
  2. Rationality allows us to define beliefs, desires and intentions (BDI). As humans, this has, or at least should have, an implication for the decisions we make and the actions we perform thereafter. The same stands for an artificial intelligence - in machine learning algorithms the results may or may not match up with reality, or even rationality, and those decisions will lead to action for an intelligent agent. PEAS (Performance measure, Environment, Actuators, Sensors) theory also comes to mind.
  3. Also, from what I’ve seen of singularity topics, there is plenty of opinion. Some opinion is based on reasonable speculation, and some on pure guesswork. (Although it sounds as if expert and non-expert opinion are somewhat similar in their estimations of when the singularity will occur - see the talk “How we’re predicting AI” by Stuart Armstrong.) This means that rational thinking is essential for humans to sort the strong theories from the weak. Making assumptions is necessary, as we don’t know everything, and the things we do know exhibit levels of uncertainty and vagueness - but the important thing is to actually state, for any particular claim, that you are making such an assumption.

The problem with the above is that almost every human is at least sometimes irrational. There are very few people who are able to live completely rationally. Uncertainty and vagueness permeate our understandings and our communications, not to mention that we do things wrong because of physical limitations (temporary or permanent). This is not necessarily always a bad thing - for example, when we fall in love (with a person, or a place), we might have our reasons for falling in love, but those reasons might not necessarily match up with science and mathematics, and if they do, then scientific and mathematical reasoning is not necessarily at the front of the mind.

In the talk, Galef mentioned (and I am paraphrasing here) that one of her students came to her saying that he did not know whether to move away from family and friends in order to take a much higher-paid job. Galef rephrased the decision: if you were already in that job, would you take a very big pay cut in order to move back to your family and friends? The answer was apparently “no”. Galef said that this rephrasing got around the problem of the status quo bias, in that people prefer to stay in a situation rather than move from it - even when that is the irrational option.

It is a good example, and rephrasing a decision can allow for more reasonable decision-making. It also depends on how much we home in on one or other form of the decision. For example, in the decision about moving for a job, there could be an element of risk involved - the what-ifs could creep in: what if I don’t make friends, what if I lose the job, what if I am not comfortable in the place where I live? The level of risk might be too great for a rational move; in other words, the level of risk is greater than the level of pay increase. Likewise, risk can creep into the inverse - if I stay where I am, then what if I lose my job, what if I lose my friends or upset my family, and what happens if my environment changes dramatically? The level of risk might be too great for a rational stay. We could also go into much more depth of reasoning, and actually give value to staying or going. This is turning the irrational into the rational… but do we always need to go into such depths of reasoning? Particularly as we’re sometimes irrational anyway, can we not hone our decisions without becoming so rational?

At the moment I don’t know the answer to this final question, or even whether it is very important. What I do know is that this irrationality, or at least the uncertainty and vagueness, is the reason why I became involved in, and continue to be interested in, Fuzzy Set Theory and Fuzzy Logic. Fuzzy attempts to model these shades of grey, allows them to be reasoned with, and does not require definitive input or output. Probability theory is another area which helps with uncertainties, and I am convinced that there is a use for fuzzy probabilities and possibility theory in Artificial Intelligence - particularly if we combine such reasoning systems with knowledge bases (and that is where my knowledge of Semantic Web / Linked Data and databases comes in handy).

These are just my initial thoughts on rationality for this blog, as I go along in my research into fuzzy theory and artificial intelligence I’m sure I’ll have more. Plus, I’m sure they’ll develop the more I consider singularity too.

Please feel free to comment.

FuzzBot part 1

Well, new blog, new style…

Some of you may know that I’m currently building a robot (with help from Beki). It has the following components so far…

  • An Arduino Mega 2560
  • An Arduino Motor Shield rev 3
  • An Arduino Ultrasonic Ping Sensor (HC-SR04)
  • Meccano for the shell
  • A Meccano motor base (with the remote control circuitry removed)
  • Batteries: 1x 9v (PP3), and 6x 1.5v (AA)
  • USB cable for connecting with the Laptop (at the moment I’m using Ubuntu Gnome 13.10 distribution of Linux operating system. The hardware is an Intel Core i7.)

At the time of writing, we’ve put together some very basic Arduino-C code which does the following (just for test purposes):

  1. Slowly inches forwards until it reaches 10cm away from an object, then…
  2. Reverses straight 10cm, then…
  3. Turns left, right, left while reversing for a few more seconds.

Today, I’ve refactored the code which controls the sensors and actuators, along with some of the basic calculations, into a library. This means that I’ve written some C++ code - the first C++ I’ve done for quite a few years. It also means that I can easily create a new Arduino-C sketch and import my C++ library.
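One of those basic calculations is turning the HC-SR04’s echo pulse into a distance. A minimal sketch of the usual conversion (written here as plain C++ for illustration, not the actual library code): sound travels at roughly 0.0343 cm/µs, and the echo pulse covers the round trip, hence the commonly quoted divisor of about 58.

```cpp
#include <cassert>
#include <cmath>

// HC-SR04: the echo pulse width (in microseconds) measures the round trip
// of the sound wave. Distance = echo_us * 0.0343 / 2, i.e. roughly echo_us / 58.
double pulse_to_cm(double echo_us) {
    return echo_us / 58.0;
}
```

So an echo pulse of about 580 µs corresponds to the 10 cm threshold at which FuzzBot stops and reverses.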

I’ve also added some very basic fuzzy commands to implement the fuzzy rule “if too slow, then speed up a bit”. It seems to work fine, but could probably do with a bit more tuning.
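As a sketch of how such a rule might work (an illustration of mine, not the actual FuzzBot code - the names and parameters are made up): “too slow” becomes a membership degree that falls to zero at the target speed, and the rule scales a small boost by that degree.

```cpp
#include <cassert>

// Degree to which `speed` is "too slow" relative to `target`:
// 1.0 when stopped, falling linearly to 0.0 at the target speed.
double too_slow(double speed, double target) {
    if (speed >= target) return 0.0;
    if (speed <= 0.0) return 1.0;
    return (target - speed) / target;
}

// Rule "if too slow, then speed up a bit": the boost is scaled
// by how strongly the rule fires, rather than being all-or-nothing.
double speed_adjustment(double speed, double target, double max_boost) {
    return too_slow(speed, target) * max_boost;
}
```

The appeal of the fuzzy version is that the correction is proportional: badly below target gets a big nudge, slightly below target gets a gentle one, rather than a crisp on/off threshold.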

I plan to research some data mining methods for collision detection in robotics; as data mining is an area of interest in my PhD, I thought it would be appropriate to try to relate it somehow. The final plan is for FuzzBot to have some forms of (fuzzy) Artificial Intelligence, which I hope will look reasonably organic to a viewer of the machine.

Keep an eye on this blog for more details. I hope to post some pictures when we’ve put the Meccano together in a better way…