Tag Archives: artificial intelligence

CIUUK15: Less than 2 Weeks to go!

A Message from Daniel Lewis - Chief Co-organiser of the Computational Intelligence Unconference:

Computational Intelligence Unconference UK 2015
#CIUUK15
15th August 2015 : 10am - 6pm : Strand Campus, King's College London

There are less than two weeks to go until the event, and everything is full steam ahead!

We’ve got so many high-quality speakers that we almost ran out of space on the schedule for them. Rohit Talwar and Prof. Murray Shanahan will be opening the unconference with a discussion about the future of artificial intelligence from a business and societal point of view. We’ve got a section on human-computer interaction with Blaise Thomson and James Ravenscroft. We’ve got a section on Smart Cities and the Internet of Things with David Beeton and Rajesh Bhardwaj. We’ve got a section discussing the practicalities of computational intelligence with Marcelo Funes-Gallanzi, Dr Leandro L. Minku and Peter Morgan. We’ve also got a health and medicine section with Pam Yoder, Hari Ponniah and Dr Yonghong Peng. Plus we’re also putting together a panel of politics/government experts from across the political spectrum to discuss the impact of AI and technology on policy.

It really will be an immensely interesting day, and we are very excited. We hope you are too. Have a look at the schedule on the website.

So far we have three sponsors confirmed: The Goodwill Company as Platinum Sponsors, AVNTK as Bronze Sponsors and ContactSingapore as tea & coffee break sponsors. We have also had a good number of you provide a little bit of crowdfunding sponsorship, which really does help! Plus we’ve had a couple of other offers recently which have yet to be confirmed. However, we are still short, and so if you personally, or your organisation, can help then please do let us know. We are a non-profit organisation, and we don’t charge for admission, and so we are reliant on the generosity of sponsors. The email address is at the bottom of this email if you wish to get in touch about this.

 

You may also want to consider getting yourself a T-shirt; profits go towards the venue and catering costs. Get your T-shirt here: https://www.tboom.co.uk/ciuuk15

CIUUK15 T-Shirt

There is still much to do. We will be updating the website over the next few days with the most recent information, and we will be responding to many emails which came through over the past few days about the event. (Apologies if you’re one of the people waiting for a response!). We will also try to put a list of nearby hotels on the website, just in case you’re staying in the area.

What can you do? Well, make sure you can still come. It’ll be on the 15th August 2015, and we will start at 10am - so try to arrive 15-20 minutes early. No problem if you can only be there for part of the day. If you booked multiple tickets, then please also make sure that you can still bring those people, and I will be asking for names of your plus-ones shortly before the event. Come to the event with ideas, come to share, and come with a problem-solving hat on.

The website is: https://ciunconference.org/uk/2015/ & the hash-tag is: #CIUUK15

If you have any questions then please do send them in my direction; I’d be happy to respond.

See you there!

Daniel Lewis

[email protected]

Bilderberg Meeting 2015 - AI and Cyber-Security

From the 11th June until the 14th June, the Bilderberg Meeting 2015 takes place in Austria. This annual private meeting brings together people from government, from business, from academia and from think-tanks to talk about topical issues around the world. I’ve not had an invite this year, but I’d certainly be willing to attend some time if an invite did come through.

This year’s Bilderberg Meeting is particularly interesting to me as two of the topics are quite close to my heart, namely:

  • Artificial Intelligence, and,
  • Cyber-security

Many of my readers will know that my current PhD research, at the University of Bristol Intelligent Systems Laboratory, involves the application of data-mining and knowledge-engineering (both forms of Artificial Intelligence) to security, co-funded by British Telecom (BT) and the EPSRC.
Plus, in August 2015 I’ll be starting a Senior Research Assistant position at the University of South Wales, in their Information Security Lab, to begin research/consultancy/teaching in intelligent cyber-security of knowledge-bases.
Not forgetting my pre-PhD industry background in knowledge bases (inc. Semantic Web, Linked Data and Open Data), and also my founding of and continuing involvement in the Computational Intelligence Unconferences.

However, I wanted to highlight the who’s who in AI and Cyber-Sec at this year’s Bilderberg Meeting:

  • Zoë Baird, CEO and President of Markle. A consultant in the realms of cyber-security and healthcare. In my opinion she would be a great candidate for the Computational Intelligence Unconference.
  • Franco Bernabè. He has a lot of interest in ICT, telecoms and also renewable energy. No doubt Franco will have an interest in hearing the latest details of Artificial Intelligence and Cyber-security, and would probably be able to give some valuable insight to the unfortunately-closed-door Bilderberg Meeting.
  • Patrick Calvar, of French Internal Security, seems to have an interest in surveillance; he’ll have his own experience of it, both online and offline.
  • Ann Dowling. Although she is not involved in artificial intelligence or cybersecurity (she is in mechanical engineering), she is the current head of the Royal Academy of Engineering here in the United Kingdom - which has an interest in both AI and cyber-sec.
  • Regina Dugan, Vice President for Engineering, Advanced Technology and Projects at Google. Not much to say about this, other than that she works with Ray Kurzweil at Google. Google are well-known for being researchers and developers of advanced robotics, advanced data mining techniques and all kinds of other things, including our next entry, who is also at the Bilderberg meeting this year…
  • Demis Hassabis, a strong researcher in all things relating to strong AI, connectionist AI (including “deep learning”) and neuroscience. He co-founded DeepMind, which was acquired by Google last year. He’ll certainly be able to provide an academic perspective to the meeting.
  • Wolfgang Hesoun, CEO of Siemens Austria. Siemens has a keen interest in Cyber-security, and also (although slightly less so) artificial intelligence.
  • Reid Hoffman, co-founder of LinkedIn and entrepreneur in the IT industry. LinkedIn has an interest in data mining and data storage, and I am sure that Reid will be able to provide interesting insight from a business-social-media perspective on both AI and cyber-sec. He actually started his career in user-experience architecture, so I imagine that he has the technology knowledge to back up his business head. Back in the day he was also involved with PayPal, and more recently is a “Board Observer” of a bitcoin technology company, which will obviously have cyber-security interests.
  • Wolfgang Ischinger is the chair of the Munich Security Conference, and a German diplomat involved in security of all forms. The Munich Security Conference has a cyber-security activities section, which assists with Cyber Security Summits.
  • Alex Karp, CEO of Palantir. Palantir are heavily involved in both artificial intelligence and cyber-security, and they do a lot of contract work with both the private and public sectors. Interestingly, Alex’s PhD was in “neoclassical social theory”.
  • Konrad Kogler, Director General of Public Security in Austria. Coming from a policing background, Konrad probably won’t be too “hot” on cyber technologies, but he’ll have a general interest in them, and it would be interesting to hear how the police fit in with certain aspects.
  • André Kudelski, Chair/CEO of the Kudelski group which is involved in digital TV, in physical-access systems and in cyber-security. André has a background in R&D and Engineering, so I suspect he’ll know his technology.
  • General Jim Mattis, Visiting Fellow at the Hoover Institution at Stanford University. Has a strong interest in all things security, and the experience to boot. Goodness knows if he has any experience of cyber-defence, but I’m sure he’ll have an opinion on it. Also note that he is at Stanford University, which is very well known for its computer science (including artificial intelligence) - does he have any insight into other projects at Stanford?
  • Pierre Maudet, “Vice-President of the State Council, Department of Security, Police and the Economy of Geneva”. He is a social liberal (but an economic conservative) and also a member of an Ecology/Green think-tank. He is one of the council members in charge of security for Switzerland, which is well known as being one of the most secure countries on the planet. It would be interesting to hear what he has to say.
  • Jim Messina of The Messina Group. The Messina Group pride themselves on being “data-driven strategists”. They worked on the Barack Obama campaigns, and they’ve recently crossed the political spectrum and the ocean to work with the Conservative Party here in the United Kingdom. They do a lot of data analysis, and so I am sure that Jim will be able to provide some insight into how data can be shaped by, and shape, society.
  • Peter Thiel co-founded PayPal with Max Levchin and the very famous Elon Musk, and also co-founded Palantir (whose CEO, Alex Karp, will also be in attendance at the Bilderberg meeting). He funds various businesses, social ventures, philanthropic adventures and interesting research. He has funded much work on Artificial Intelligence via the Machine Intelligence Research Institute (MIRI). Economically he is libertarian, but he seems to be somewhat socially liberal, and in general seems to be quite a nice person from what I can tell, and from what I have heard from my friends and contacts who have met him. He is also involved in things such as longevity research, technological singularity and human sustainability.

These are just some of the many participants at the Bilderberg Meeting of 2015. It is a shame that the results of the meeting are not public, as it would be very, very interesting to see what was discussed and what opinions these people hold. Perhaps someday I’ll be invited and will find out for myself, but even if that happened I would probably be sworn to secrecy. I guess at the moment we can only speculate, and see what happens over the next few months and years.

I would, however, like to invite those listed above, or those involved in the various companies I have listed, or anybody else interested, to come along to the Computational Intelligence Unconference UK 2015 (CIUUK15). We are looking for attendees, for speakers and for sponsors. Any way that you can help will be appreciated; just contact me. CIUUK15 will happen at King's College London on the 15th August 2015. Perhaps we can have our own Bilderberg-style meeting at the unconference, just a bit more open. We certainly have people attending who are at the cutting edge of their fields, along with people in academia, in business and in the public sector.

Daniel Lewis
* My Computational Intelligence Unconference Email Address: daniel <<at>> ciunconference <<dot>> org

CIUUK15 Update

I’ve just sent out a message (similar to what is below) to our Computational Intelligence Unconference UK 2015 attendees…

At the time of writing we have 109 days to go (the event is on 15th August 2015); we have 100 people already registered (our capacity is over 200), 7 of whom have given us some crowdfunding; and we have 5 speakers already confirmed (with much more time and space for additional talks/workshops). A very big thank you to those who have registered, and if you haven’t registered yet, then go do it now (website: https://ciunconference.org/uk/2015)

But we need help! Here is how you might be able to help:

(1) Social Networking: We need help to get the word out about the event. If you have a Twitter, Facebook, LinkedIn or other social network account then it would be great if you could advertise the event. You can use our official short URL bit.ly/ciuuk15 and our official hashtag #CIUUK15

(2) Sponsorship: We are in urgent need of sponsorship. We’ve got to raise funds to cover the cost of the venue and the food & drink. We are doing our best to keep costs down and get the best deals. Ideally we need a few corporate sponsors, and preferably quite soon; however, smaller organisations and personal donations will be very welcome (including crowdfunding offers). If you work for an organisation that could offer some sponsorship in return for marketing/advertising and bespoke audience engagement, or if you could personally offer to cover the costs of attendance (which is roughly £60 a head), then please let me know as soon as possible. Even if you just have a contact in a related company which might be interested in helping us out, then let them know and let me know. I will reiterate: we are non-profit, we are entirely organised by volunteers, and all incomings will go straight into venue/catering costs. The event is heavily dependent on the generosity of our sponsors and volunteers.

(3) Meet-ups/Hackathons: We have a seminar room, and we’re keen on having guest meet-ups and guest hackathons use the space for an hour or two each. So, if you lead or are a part of a (related) meet-up or hackathon, then please get in touch.

(4) Tutorials/Workshops: The same seminar room could also be used by a business or organisation for a tutorial/seminar/workshop. However, we may ask for a donation if the business is for-profit. Feel free to email me to find out more.

(5) Talks: We are also in need of more talks. Short talks and long talks. If it’s a talk by a business then the business might also want to think about helping to sponsor the event. If it’s a talk from a personal perspective, or a very technical perspective, then the talk can be done freely (libre et gratis).

(6) Volunteering: We will need on-the-day volunteers. Volunteers will help manage the rooms and the microphones, and will help give out the badges/lanyards at the start. We also need people to bring cameras (still and moving), to help live-tweet the event, and to blog before/during/after the event.

Contact me now if you can help. My email address is:
daniel [at] ciunconference [dot] org

More information about the event is on our website:
https://ciunconference.org/uk/2015
King's College London - 15th August 2015 - 10:00-18:00

On behalf of the organisation team, thank you for your interest, thank you (in advance) for your help, and to those of you attending, I look forward to seeing you on the 15th August 2015.

Daniel Lewis
* Chief Co-organiser of the Computational Intelligence Unconference UK 2015
* Founder & Chair of the Computational Intelligence Unconference Association (a Non-profit Unincorporated Association)
- Email: daniel [at] ciunconference [dot] org

Thoughts on…. Politics & Artificial Intelligence

Firstly, I’d like to draw your attention to an article written by my newspaper of choice (The Independent) entitled “Advances in artificial intelligence could lead to mass unemployment, warns experts.” This particular article was highlighted to me by my good friend Alex Blok.

It pains me that people are probably going to be pulled into believing that artificial intelligence will only lead to mass unemployment. It simply is not necessarily the case! Before I start my post properly, I’d just like to highlight that I’m not an economist, but I am quite passionate and hopefully quite knowledgeable about both artificial intelligence and politics.

Firstly, humanity has been innovating ever since we’ve been Homo sapiens. Innovation can be defined as finding new or better solutions to problems we encounter. One of the biggest problems innovation has attempted to solve is the health and safety of people at work. The wheel allowed one person to push a heavy object where four people would previously have had to lift it. The wheel also led to innovations such as pulleys. The industrial era attempted to simplify people’s jobs by providing automation, and then also gradually improved health and safety in those factories. So, the assembly line simplified the process of putting things together (e.g. vehicles and electronic items) - eliminating some of the dangers, and many repetitions of doing things by hand. Each of these innovations, arguably, caused some unemployment (but not mass unemployment). At the same time, each arguably allowed for different jobs to be created.

Automation allows for the simplification of processing, which directly leads to a “freeing up” of costs. This single fact often means that positions in a business are no longer required, and the people in those positions are released - aiding in the “freeing up” of costs. There are at least four choices about where this freed-up wealth goes: (1a) it goes on creating new jobs within the business, or (1b) new avenues of business, (2) it goes to philanthropic projects, (3) it goes into paying off debt early, or (4) it goes into the pockets of the management of the business as they’ve been “clever” enough to employ such a solution.

I suspect that in contemporary society, with its increasingly capitalist stance, it goes more into option (4) and option (3) than the other options (although there does seem to be some hint towards (1b) and (2), to a much lesser degree).

Now we come to Artificial Intelligence. We’ve been employing Artificial Intelligence techniques since about the mid-1900s, when simple AI techniques began to allow for automated route discovery, automated pattern finding, automated quality assurance, speech-to-text assistance for the visually impaired, and so on. There will continue to be advances in Artificial Intelligence which simplify human life. What’s different now to allow for such an unemployment worry? Partly, it is more widely known about, thanks to the general public becoming a bit more technology-savvy and providing greater funds to technology businesses. Another potential reason for such a worry could be that the technological singularity is a possibility within the next 1 to 100 years (there are a variety of speculations), but I think this is a lesser reason for such an unemployment worry, and is more of an existential-risk problem if a globally unfriendly AI were to be created (but that is a completely different topic).

What needs to happen?

I think that the Future of Humanity Institute at the University of Oxford is correct that we need to start thinking about the risks which artificial intelligence poses. Particularly as evolutionary algorithms are at such a stage that they could self-evolve at a greater pace than society can cope with. This risk research needs to feed directly into local, national and international governments, which are going to have to change rather rapidly. We must keep in mind that freed-up wealth, instead of being fed into the pockets of business owners (or even authoritarian governments), could (and should!) be shared out to make humanity better - allowing for new/different jobs, increased quality of education and research, better health for all of humanity, genuine ecological improvements that are sustainable, and allowing for creativity within humanity to encounter new problems and create new innovations to solve those problems. We must do this with freedom, equality and community in mind.

So, in summary: AI, like any other innovation, is not really a problem but a solution. What could be a problem, however, is the management of those solutions, including corporate bosses, politicians and the media. We need to collectively find solutions - collectively being the whole of the community: whether employed, unemployed, management, politician or journalist. Hysteria and panic are not the way forward. Careful analysis and genuine support for humanity are the way forward.

 

Computational Intelligence Unconference UK 2014 - Announcement

Hi all,

I wanted to let you all know about an event that I am co-organising. It’s an unconference (as some of you know, I’ve organised unconferences before), on “Computational Intelligence”, in London (UK), on 26th July 2014. If you can be in the area on that date, then pop over to the CI Unconference UK 2014 website and get yourself a ticket. It’ll be a great day, full of wonderful talks and ideas, and lots of interesting people with different experiences.

More details below…

Daniel

Computational Intelligence Unconference UK 2014
BT Centre, London, UK
26th July 2014

Computational Intelligence Unconference UK 2014 is looking for attendees and speakers at the event on 26th July 2014. The unconference will bring people together with similar interests for networking and talks about Computational Intelligence and Intelligent Systems. The unconference will be held at the BT Centre in the St Paul’s area of London, England.

Free tickets and more information are available from the website. Space is limited, so get your free tickets as soon as you can from our website:

https://ciunconference.org/uk/2014/

The event is an “unconference”, which is an informal self-organising meeting with free (ticketed) entry, quite unlike a standard conference. An unconference is attendee-run: if you submit an idea, you’ll get a slot in a first-come, first-served timetable to talk about what you like, providing it is relevant to the general topic of Computational Intelligence.

This particular unconference will be suited to those people who use, or have an interest in, Computational Intelligence. Talks will have an element of theory and/or application. Topics include:

  • Fuzzy Set Theory and Fuzzy Logic,
  • Artificial Neural Networks,
  • Evolutionary Computing and Genetic Algorithms,
  • Connectionist Systems,
  • Autonomous Mental Development,
  • Hybrid Intelligent Systems,
  • Artificial General Intelligence,
  • Applications (e.g., Finance, Government, Telecommunications, Security, Open Data, Social Networking)

Organisers:

  • Daniel Lewis, University of Bristol
  • Stephen G Matthews, University of Bristol

Thoughts on… rationality

I’ve recently become quite interested in the idea of the technological singularity, which is basically where artificial intelligence becomes more intelligent than human intelligence. What form this takes, and how we get there, is not known, but it is not inconceivable that we accidentally or purposefully build an artificial general intelligence which evolves itself beyond the level of its creators’ intelligence.

That aside, I have watched a few of the talks from the Singularity Summit of 2012, and stumbled across one talk by Julia Galef (of CFAR) on “Rationality and the Future”. Rationality is important in its own right, but it has a special relationship with singularity theory. It seems to me (and those of you in this particular field, please do feel free to correct me) that rationality is important in singularity theory for the following reasons:

  1. Machines are programmed to be rational. Programming languages are based on mathematics - such as algebra, calculus, geometry and proof. It is this “proof” theory which allows us to test, and be confident, that an algorithm (or a whole piece of software) will act in a certain way.
  2. Rationality allows us to define beliefs, desires and intentions (BDI). As humans, this has, or at least should have, an implication on the decisions we make and the actions we perform thereafter. The same stands for an artificial intelligence - in machine learning algorithms the results may or may not match up with reality or even rationality, and those decisions will lead into action for an intelligent agent. PEAS (Performance measure, Environment, Actuators, Sensors) theory also comes to mind (a toy sketch of such an agent loop follows this list).
  3. Also, from what I’ve seen in singularity topics, there is plenty of opinion. Some opinion is based on reasonable speculation, and some is based on pure guesswork. (Although it sounds as if expert opinion and non-expert opinion, when it comes to singularity, are somewhat similar in their estimations of when singularity will occur. See the talk “How we’re predicting AI” by Stuart Armstrong.) This means that rational thinking is essential for humans to sort through the strong theories and the weak theories. Having assumptions is necessary, as we don’t know everything, and those things that we do know exhibit levels of uncertainty and vagueness, but the important thing is to actually specify, for any particular statement, that you are taking such an assumption.
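
As a rough illustration of the belief-desire-intention idea mentioned in point 2, here is a minimal agent loop sketched in Python. The thermostat-style percepts, beliefs and goal are invented purely for this illustration; real BDI frameworks (and full PEAS formulations) are considerably richer:

    # A toy belief-desire-intention (BDI) style loop, purely illustrative.
    # Beliefs are revised from percepts, a desire (goal) is fixed, and an
    # intention (the next action) is chosen by simple deliberation.

    def update_beliefs(beliefs, percept):
        """Fold a new sensor reading into the agent's beliefs."""
        beliefs["temperature"] = percept["temperature"]
        return beliefs

    def deliberate(beliefs, desire):
        """Pick an intention (action) that moves beliefs towards the desire."""
        if beliefs["temperature"] < desire["target_temperature"] - 1:
            return "heat"
        if beliefs["temperature"] > desire["target_temperature"] + 1:
            return "cool"
        return "idle"

    def run_agent(percepts):
        beliefs = {"temperature": None}
        desire = {"target_temperature": 21}   # the agent's fixed goal
        for percept in percepts:              # sense -> revise -> deliberate -> act
            beliefs = update_beliefs(beliefs, percept)
            intention = deliberate(beliefs, desire)
            print(f"believes {beliefs['temperature']}C, intends to {intention}")

    run_agent([{"temperature": 17}, {"temperature": 21}, {"temperature": 24}])

Even a loop this small makes point 2 concrete: the agent’s beliefs may or may not match reality (a faulty sensor, for instance), yet they still drive its actions.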

So the problem with the above is that almost every human is at least sometimes irrational. There are very few people who are able to live completely rationally. Uncertainty and vagueness permeate our understandings and our communications, not to mention that we do things wrong because of physical limitations (temporary or permanent). This is not necessarily always a bad thing - for example, when we fall in love (it could be with a person, or a place), we might have our reasons for falling in love, but these reasons might not necessarily match up with science and mathematics, and even if they do, scientific and mathematical reasoning is not necessarily at the front of the human mind.

The talk by Galef mentioned (and I am paraphrasing here) that one of her students came to her saying that he did not know whether to move away from family and friends in order to take a much higher paid job. Galef rephrased the problematic decision: if you were already in that job, would you take a very big pay cut in order to move back to your family and friends? The answer was apparently “no”. Galef said that this rephrasing of the decision got around the problem of the status quo bias, in that people prefer to stay in a situation rather than move from it - even if that is the irrational option.

It is a good example, and rephrasing a decision can allow for more reasonable decision making. It also depends on how much we home in on one form of the decision or the other. For example, in the decision about moving for a job, there could be an element of risk involved - the what-ifs could creep in: what if I don’t make friends, what if I lose the job, what if I am not comfortable in the place where I live. The level of risk might be too much for a rational move; in other words, the level of risk is greater than the level of pay increase. Likewise risk can creep into the inverse - if I stay where I am, then what if I lose my job, what if I lose my friends or upset my family, and what happens if my environment changes dramatically. The level of risk might be too much for a rational stay. We could also go into much more depth of reasoning, and actually give value to staying or going (a toy calculation along these lines follows below). This is turning the irrational into the rational… but do we always need to go into such depths of reasoning? Particularly as we’re sometimes irrational anyway, can we not hone our decisions without becoming so rational?
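
As a toy example of actually giving value to staying or going, here is a sketch of a probability-weighted comparison in Python. The utilities and probabilities are entirely made up; the point is only the shape of the calculation, not the numbers:

    # Toy expected-value comparison for the "stay vs move" decision.
    # All probabilities and utilities below are invented for illustration.

    def expected_value(outcomes):
        """Sum of probability-weighted utilities for one option."""
        return sum(p * u for p, u in outcomes)

    # (probability, utility) pairs for each option's possible outcomes
    move = [(0.6, 80),   # the new job works out, higher pay
            (0.3, 20),   # the job is fine but life is lonely
            (0.1, -50)]  # the job is lost far from home

    stay = [(0.8, 40),   # life continues comfortably
            (0.2, -10)]  # the local job is lost anyway

    print("move:", expected_value(move))  # 0.6*80 + 0.3*20 + 0.1*-50 = 49.0
    print("stay:", expected_value(stay))  # 0.8*40 + 0.2*-10 = 30.0

Of course, choosing those probabilities and utilities is exactly the part humans find hard, which is rather the point of the paragraph above.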

At the moment I don’t know the answer to this final question, or even whether it is very important. What I do know is that this irrationality, or at least the uncertainty and vagueness, is the reason why I became involved in, and continue to be interested in, Fuzzy Set Theory and Fuzzy Logic. Fuzzy methods attempt to model these shades of grey, allow them to be reasoned about, and do not require definitive input or output (a small sketch below illustrates the idea). Probability theory is another area which helps with uncertainties, and I am convinced that there is a use for Fuzzy Probabilities and Possibility theory in Artificial Intelligence. Particularly if we combine such reasoning systems with knowledge bases (and that is where my knowledge of Semantic Web / Linked Data and Databases comes in handy).
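
To make the fuzzy idea slightly more concrete, here is a minimal sketch in Python. The min/max operators are the classic Zadeh choices, but the particular sets (“warm”, “busy”) and their breakpoints are invented for this illustration:

    # A minimal illustration of fuzzy membership and fuzzy logic operators.
    # The "warm" and "busy" sets and their breakpoints are made up here.

    def warm(temperature_c):
        """Degree (0..1) to which a temperature counts as 'warm'."""
        if temperature_c <= 10:
            return 0.0
        if temperature_c >= 25:
            return 1.0
        return (temperature_c - 10) / 15.0  # linear ramp between 10C and 25C

    def busy(requests_per_sec):
        """Degree (0..1) to which a server counts as 'busy'."""
        if requests_per_sec <= 50:
            return 0.0
        if requests_per_sec >= 200:
            return 1.0
        return (requests_per_sec - 50) / 150.0

    # Classic Zadeh operators: AND = min, OR = max, NOT = 1 - x.
    def fuzzy_and(a, b): return min(a, b)
    def fuzzy_or(a, b): return max(a, b)
    def fuzzy_not(a): return 1.0 - a

    t, r = 18.0, 120.0
    print("warm:", round(warm(t), 2))    # 0.53: partly warm
    print("busy:", round(busy(r), 2))    # 0.47: partly busy
    print("warm AND busy:", round(fuzzy_and(warm(t), busy(r)), 2))
    print("warm OR busy:", round(fuzzy_or(warm(t), busy(r)), 2))

The point is simply that inputs and conclusions are held to a degree rather than forced into a crisp true/false, which is exactly the vagueness discussed above.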

These are just my initial thoughts on rationality for this blog, as I go along in my research into fuzzy theory and artificial intelligence I’m sure I’ll have more. Plus, I’m sure they’ll develop the more I consider singularity too.

Please feel free to comment.