
Week 1: Representation as a Fluent


[Notes as I wrote them down during the meeting]

We read: Representation as a Fluent: An AI Challenge for the Next Half Century (http://dream.inf.ed.ac.uk/projects/dor/papers/bundyieee06.pdf)

AM: ORS (the Ontology Repair System) is good but might be too specific.

Extreme case: having to represent everything as it is happening, with many minor things going on at the same time.  When do you communicate a new concept to repair what we think about it?  How do we measure how similar two concepts are?  Grounding in the real world: are ontologies the right representation of things in the real world, and is ontology repair really a suitable mechanism?  Real-world stuff changes so often; ontology approaches work better in restricted domains.

In the paper: doesn't explain real-world requirements or give a use case.

But MA likes the idea of it in theory.

PP: evolution through repair.  Own ontology gets facts as it interacts with other ontologies.  Figures out contradictions.  Doesn't think it would scale very well.

Core issue: we (humans) describe ontologies based on the real world to start with, but then algorithms make the changes based on other ontologies, moving away from the real world.  MA says for practical purposes it won't really work.

PP issue: the paper has one main ontology that tries to represent everything at once.  Would be better to take a Semantic Web (SW) approach where lots of separate ontologies link to each other.

MA: the more you impose on a specific representation, the less power it has.  If you could cycle between general and specific ... if the process of creating the representation could change as the information changed.  PP: difficult to define the process of how to change the representation.

Simple, lightweight ontologies are much more widely used.  Connecting them over SW can still represent lots of stuff, but can lose reasoning power.  Trade-off between reasoning and ease of use.

Would work better if ontologies remain individually powerful.  PP used Protégé to do inferencing for an MSc, but it used loads of CPU power... hard to reason over big ontologies, or the whole SW.

Can't see the link between the example and ORS.

MA: The paper says the world is too complicated to specify all preconditions and postconditions.  How does this relate to ontologies?  Do ontologies have infinite dimensions?  What is the dimensionality of an ontology?  The set of things a language can express is infinite.  PP: If a concept has only one attribute, it's one-dimensional.  So ontologies are unconstrained in that respect.

MA: Be interesting for them to rewrite paper purely in terms of ontologies and see if it still makes sense.

PP: Questions - when to evolve?  When there is an inconsistency.  Problems with causing inconsistencies when you make changes.  That's where repair comes in.  If you only allow evolutionary steps that can't introduce inconsistencies, that's very restrictive.  Evolve, check, then repair is the other option (see the sketch below).  (Lots of work on this, but it's not actually used anywhere; when ontologies are used in practice, people don't want to rely on unreliable repair systems.)
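
A minimal sketch of that evolve-check-repair cycle, assuming a toy ontology of plain-text facts; the helper functions are hypothetical placeholders, nothing like a real repair system such as ORS:

```python
# Toy illustration only: facts are plain strings and "not <fact>" is the negation.
def evolve(ontology, new_fact):
    """Add a new fact without blocking changes that might cause inconsistency."""
    return ontology | {new_fact}

def find_inconsistencies(ontology):
    """Placeholder consistency check: a fact clashes with its explicit negation."""
    return {fact for fact in ontology if ("not " + fact) in ontology}

def repair(ontology, conflicts):
    """Placeholder repair: drop both sides of each conflict (a real ORS is smarter)."""
    return ontology - conflicts - {"not " + fact for fact in conflicts}

ontology = {"the block is on the table", "the robot holds nothing"}
ontology = evolve(ontology, "not the block is on the table")  # conflicting observation
conflicts = find_inconsistencies(ontology)                    # check step
if conflicts:
    ontology = repair(ontology, conflicts)                    # repair step
print(ontology)  # {'the robot holds nothing'}
```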

Future government database of ontological data about people.
Probabilistic representation of 'should be arrested'.  Would you really trust an algorithm to make that decision?  (PP: depends on your political views...)

MA: Using neural networks at a bank to decide whether to give somebody credit.  Put in all the factors: credit rating, amount of money, etc.  Legally they have to say why they're refusing credit in borderline cases, which is difficult with just graphs of numbers.  In machine learning, something is being optimised for.  People have different ideas of how to treat borderline cases in the real world.  PP: bringing in the legal system makes an even bigger mess.

Italian seismologists - if a system had given them that information automatically, whose fault is it if it was wrong?  Blame the people who made the system?  Normal people expect it to be right, but it's just like a Magic 8-Ball.  PP: you'd still expect the human expert to make the final call.

Two systems: one that works out probabilities and one that looks at the probabilities and gives you an answer.  The latter is just what the human expert does, but the expert also looks at the big picture and reassesses the whole situation.
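
A toy sketch of that two-system split; the evidence format, the estimator formula and the threshold are all made up for illustration, not a real risk model:

```python
# Hypothetical example: separate probability estimation from the decision policy.
def estimate_probability(evidence: dict) -> float:
    """First system: turn observed evidence into a probability (stand-in formula)."""
    return min(1.0, evidence.get("tremor_count", 0) / 10)

def decide(probability: float, threshold: float = 0.5) -> str:
    """Second system (or the human expert): apply a policy to the probability."""
    return "act" if probability >= threshold else "wait"

p = estimate_probability({"tremor_count": 7})
print(p, decide(p))  # 0.7 act
```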

Brain scan stuff - making decisions based on equipment that could be wrong.  Can't possibly know the real state.  Not their fault if they make a bad decision.

PP: We have to make it work first, before we worry about the ethical stuff...  So far what people expect is that if you make a technology, it should work.  But experts have to trust the data from their systems; what other choice do they have?

What are each of us using ontologies for?  MA: still thinking of them in an abstract way, not tied to technology.  AG and PP: RDF, SW stuff, lightweight ones.

MA: Re: some paper*... create an ontology of scenes of meta-actions (made up of smaller actions).  E.g. drinking tea - concepts of tea, the person drinking, the domain, etc.  A surveillance camera looking at a guy dragging a bag works out bag, person, direction, etc.  The purpose of the ontology is to give an overall idea of what's going on, so when there are unexpected events it can compare them to stock pre-specified events and work out what's what: whether it's a new event, or part of another event.  Cognitive architecture called ACT-R.  Something like Hominae(?).  Something called Scone?  MA can't remember...  Gives a framework over the WordNet and FrameNet knowledge bases to link what's seen to a wider context.  Reasoning is outside the ontology; the ontology just represents.  MA didn't like it because it was static, but likes the idea of it serving a disambiguation purpose.

* Using ontologies in a cognitive grounded system: automatic action recognition in video surveillance.  Alessandro Oltramari & Christian Lebiere (CMU)

It also talks about the system as a component of a larger system, without describing the larger system... (PP: it's a puzzle more than a paper...)

FrameNet is like WordNet but defines lots of semantic frames: named structures, something like a person or situation with concepts attached to them.  Philosophical idea.  Also defines actions (like flying a plane, drinking tea).  (https://framenet.icsi.berkeley.edu/fndrupal/about)
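
FrameNet can be poked at programmatically through NLTK's bundled interface (assuming the framenet_v17 corpus has been downloaded); the 'Ingestion' frame roughly covers the drinking-tea example:

```python
import nltk
nltk.download("framenet_v17")              # one-off corpus download
from nltk.corpus import framenet as fn

frame = fn.frame("Ingestion")              # frame for eating/drinking events
print(frame.name)
print(sorted(frame.FE.keys()))             # frame elements, e.g. Ingestor, Ingestibles
print(sorted(frame.lexUnit.keys())[:5])    # a few lexical units attached to the frame
```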

Ontologies are more used for representation than reasoning.  Combining two ontologies with 200 axioms each = more than 400.  I want to manufacture two ontologies and fit them together to see (rough sketch below).  PP hasn't seen reasoning used for more than just sub- and super-classing (AG: and ranges and domains).
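
A minimal sketch of that experiment, assuming rdflib and owlrl are installed and using two made-up toy ontologies (far smaller than 200 axioms each); the RDFS closure materialises exactly the subclass/domain/range inferences mentioned above:

```python
from rdflib import Graph
import owlrl

# Two tiny hand-made ontologies.
animals_ttl = """
@prefix ex:   <http://example.org/animals#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:Penguin rdfs:subClassOf ex:Bird .
ex:Bird    rdfs:subClassOf ex:Animal .
"""

zoo_ttl = """
@prefix ex:   <http://example.org/animals#> .
@prefix zoo:  <http://example.org/zoo#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
zoo:keeps rdfs:domain zoo:Zoo ;
          rdfs:range  ex:Animal .
zoo:edinburgh zoo:keeps ex:pingu .
ex:pingu a ex:Penguin .
"""

g = Graph()
g.parse(data=animals_ttl, format="turtle")
g.parse(data=zoo_ttl, format="turtle")
print(len(g), "asserted triples after fitting the two ontologies together")

# RDFS closure: adds the triples entailed by subclass, domain and range axioms.
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
print(len(g), "triples after RDFS inference")
```

With RDFS semantics this should, for example, infer that ex:pingu is an ex:Animal (via the subclass chain and the range of zoo:keeps) and that zoo:edinburgh is a zoo:Zoo (via the domain).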

MA: Doctors afraid to say something that might be wrong because they'd be liable.  Relying on expert systems.  A lot of it is memorising things - flowcharts to make decisions.  Does it mean they're less liable because they're relying on a system?  Surely as the expert they should be able to judge if there's something wrong with an outcome?  Use it to back up your own judgements instead.  Expert systems are good because they explain their reasoning.  MA: programmers more willing to _not_ trust the system because we know how fragile they are - people without training more likely to trust the system and less willing to question it.  PP: probabilities should be weighted by the doctor.  Expert system should expose all options to the doctor and let them decide, not hide anything.  Especially borderline cases.

Different people's mentalities affect how they view probabilities: "It'll never happen to me", "It always happens to me".  As a result, their opinions of a system's outputs will vary depending on whether the outcomes turn out right or wrong.

Agents are a move towards a certain kind of software engineering.  Many things could be described as agents.  I only know about agents as an abstract concept, not how you'd program one in practice.  I need to learn more.  Need lots of services for a multi-agent system.  MA: Programs do it for free, agents do it for money.  Can't make them do things they don't want to.

NEXT WEEK: Pick from MA's list.



Unrelated:

MA invented 'social networking editor' job and got paid for it at his last uni.  Wrote a big report about what other unis were doing.

In 50 years antibiotics might not work at all.
