The current LifeNet inference engine is written in C++ and
implements a Temporal Pairwise Markov Random Field (TMRF2).
The root inference graph contains more than 100,000
commonsense assertions about the world that support
inference about the state of the world over time. The API
currently exports XML-RPC, Python, and C interfaces, and
the library compiles for Linux, Mac OS X, and Windows
(Cygwin).
Python
The XML-RPC server is written in Python; it wraps the
C++ library and exports its functions on port 8056. A few
XML-RPC client samples are provided in the python directory
to demonstrate both the base commonsense inference model
and custom user-defined LifeNet inference models.
XML-RPC
First, the XML-RPC server needs to be running so that
the client can connect on port 8056 and make use of the
exported API.
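Before running the clients, it can help to verify that the
server is reachable. Here is a minimal connectivity check
(a sketch, not one of the shipped samples), assuming the
server is running locally on the default port:
import socket
from xmlrpclib import Server

# connect to the LifeNet XML-RPC server on its default port
lifenet = Server("http://localhost:%d" % 8056)
try:
    root_graph = lifenet.get_root_graph()
    print "connected; root graph id:", root_graph
except socket.error, e:
    print "server not reachable on port 8056:", e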
Server
The LifeNet server must be executed from the working
directory, 'trunk/apps/lifenet/inference', and the command
is 'python/lifenetserver.py'. When the server is launched
successfully, the output will be similar to the following.
1cc-dhcp-90:~/Desktop/trunk/apps/lifenet/inference neptune$ python/lifenetserver.py
Initializing Lifenet
Reading LifeNet file: ../lifenet-graph.lisp
Parsing LifeNet links
loaded LifeNet with 96377 links (30191 prior and 66186 temporal) connecting 18125 nodes.
26216 duplicate edges ignored (21.3846% of total edges).
Calculating default data.
TMRF2: inferring missing data iteration 1
18125 node probabilities inferred
TMRF2: inferring missing data iteration 2
18125 node probabilities inferred
TMRF2: inferring missing data iteration 3
9391 node probabilities inferred
TMRF2: inferring missing data iteration 4
3069 node probabilities inferred
TMRF2: inferring missing data iteration 5
17 node probabilities inferred
TMRF2: inferring missing data iteration 6
2 node probabilities inferred
TMRF2: inferring missing data iteration 7
0 node probabilities inferred
Opening save files
serving on port 8056
Serving
Once the server is launched, it has performed the
initial default inference for the world state given no
known data. This startup inference speeds up later
client-invoked inferences by providing a good starting
state for subsequent belief propagation.
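As a rough illustration (a sketch, not one of the shipped
samples), a client can time an inference against the running
server; the call starts from the default state the server
computed at startup rather than from scratch. Timings will
vary with the model and machine:
import time
from xmlrpclib import Server

lifenet = Server("http://localhost:%d" % 8056)
root_graph = lifenet.get_root_graph()
known_data = lifenet.data_new(4)
inferred_data = lifenet.data_new(4)

# this inference starts from the default state the server computed at startup
start = time.time()
lifenet.infer(root_graph, known_data, inferred_data)
print "infer took %.2f seconds" % (time.time() - start)

lifenet.data_delete(known_data)
lifenet.data_delete(inferred_data)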
Client
A few examples of LifeNet clients are provided in the
python directory,
'trunk/apps/lifenet/inference/python'. Two good client
test programs to look through are 'test_commonsense.py'
and 'test_expert.py'.
test_commonsense.py
First, we set up the XML-RPC client connection. In
Python, this couldn't be easier. The server uses port
8056 by default. We name the server proxy 'lifenet', and
we can access all LifeNet functionality through this
Python object.
import sys
from xmlrpclib import Server
lifenet = Server("http://localhost:%d" % 8056)
Now that the connection has been established to the
already running server, we ask LifeNet for the integer
ID of the "Root Graph", the graph that contains all of
the Open Mind Common Sense knowledge. This script uses
the test data below to infer what will be true next
using this graph.
root_graph = lifenet.get_root_graph() # stores most general commonsense model
We create our Data Objects, which store the
probabilities of the truth of statements about the world.
Statements in LifeNet are primarily English-language
phrases that can be true or false.
# we'll use this data object to store the default state of LifeNet
default_data = lifenet.data_new(4)
# allocate two more data objects with 4 time slices each
known_data = lifenet.data_new(4)
inferred_data = lifenet.data_new(4)
Now that we have allocated our Data Objects on the
server, we perform inference using our empty
known_data object and store the inferred
results in our default_data object. This
allows us to later compare our subsequent inferences to
this "base state" of the world, given that we know
nothing.
# infer the default_data state of lifenet with no specific known_data
lifenet.infer(root_graph, known_data, default_data)
We now need to set up our knowledge data for the test
inference that we will perform with the LifeNet
commonsense model. We have allocated our data objects
to consist of 4 time slices each, so we will want to set
certain assertions to be true and others to be false
within these slices. The format for specifying this
knowledge is a slice number followed by a list of
assertion-probability pairs. The following code
means that in the first slice, slice zero, there is a
50% chance that 'buy a house' is true and a 100% chance
that 'spend money' is true. Also, in the second time
slice, slice one, we have specified that assertions
containing the term 'high wire' are absolutely false.
All of the remaining assertions within the model will be
inferred by belief propagation.
result = lifenet.data_add_slices(known_data, root_graph, """
(0 ('buy a house' 0.5) ('spend money' 1.0))
(1 ('high wire' 0.0))
""")
if result != 0:
print "Error adding data to known_data."
sys.exit(-1)
In order to see exactly what is inside our
'known_data' data object, we can ask for the contents
and then print them to the screen. This causes the
internal server contents to be communicated over the
XML-RPC connection to the client.
# display contents of known_data object
print "known_data:"
print lifenet.data_get_contents(known_data)
Finally, we can invoke the inference using the "Root
Graph" containing all of the LifeNet commonsense
assertions and relations by providing our 'known_data'
data object and our empty 'inferred_data' data object,
which will be filled with inferred values for all of the
previously unknown assertions in the LifeNet model.
lifenet.infer(root_graph, known_data, inferred_data)
Now, in order to compare what we have inferred with
what is "normally" the case, we subtract our
'default_data' data object from our 'inferred_data'
data object. The result of this subtraction is placed
back into our inferred data object. This tells us how
much more or less probable each assertion is, given the
knowledge that we have provided above.
# subtract default probability state from inferred state to get the inferred probable difference
lifenet.data_subtract(inferred_data, default_data)
Now to view the results and to provide a sanity check
for the model, we take the top 4 assertions from each
data slice and print them to the screen.
# display 4 nodes with largest positive difference from default_data states
print lifenet.data_get_top_n_contents(inferred_data, 4)
Because the XML-RPC layer does not currently provide
garbage collection for user data or custom LifeNet
objects, the server-side data allocations must either be
freed, or the server must periodically be restarted.
# cleanup server resources
# (garbage collection should handle this if we use local data object destructor functions)
lifenet.data_delete(default_data)
lifenet.data_delete(known_data)
lifenet.data_delete(inferred_data)
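One way to automate this cleanup on the client side, as the
comment above suggests, is to wrap the integer data handle
in a small local object whose destructor frees the
server-side allocation. The following DataHandle class is
a hypothetical sketch, not part of the shipped samples:
from xmlrpclib import Server

class DataHandle:
    """Hypothetical wrapper that frees its server-side data object when collected."""
    def __init__(self, server, num_slices):
        self.server = server
        self.id = server.data_new(num_slices)
    def __del__(self):
        # free the server-side allocation when this local object is garbage collected
        self.server.data_delete(self.id)

lifenet = Server("http://localhost:%d" % 8056)
known = DataHandle(lifenet, 4)  # pass known.id wherever a data object is expected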
Output from the LifeNet Commonsense Inference client
XML-RPC script should be similar to the following:
1cc-dhcp-90:~/Desktop/trunk/apps/lifenet/inference neptune$ python/test_commonsense.py
All of known_data:
[['I buy a house'], ['I am on the high wire'], [[1.0], [0.0]]]
Top 4 assertions for each slice of inferred_data:
[['I live space', 'I contact a real estate agent', 'I deal with a real estate agent', 'I talk to a property agent'], ['I am on a blackboard', 'I see an employer', 'I see cent', 'I see a motive'], [[0.400501, 0.400501, 0.39200099999999999, 0.376], [0.24839426017753613, 0.24836529819857411, 0.24835808675681736, 0.24828859739465237]]]
test_expert.py
The second example client, rather than using the root
commonsense graph, builds a small custom "expert" model of
its own. It creates a new graph, adds weighted edges
between assertions (in the edge format below, '->' links
nodes within the same time slice and '=>' links a node in
one slice to a node in the next), infers the model's
default state, asserts some known data about money and car
ownership, and prints the top inferred assertions for each
slice.
import sys
from xmlrpclib import Server
lifenet = Server("http://localhost:%d" % 8056)
root_graph = lifenet.get_root_graph() # stores most general commonsense model
my_graph = lifenet.new_graph() # will represent an expert LifeNet model
print "setting up expert graph"
# -> means same time slice
# => means from present slice to future slice
result = lifenet.add_edges(my_graph, """
(=> 'have money' 'buy car' 0.5)
(=> 'own car' 'have money' -0.5)
(=> 'own car' 'buy car' -0.8)
(-> 'buy car' 'own car' 0.5)
(-> 'own car' 'have money' -0.6)
(-> 'own car' 'drive car' 1.0)
(-> 'drive car' 'crash car' 0.1)
(-> 'crash car' 'have money' -0.8)
(-> 'drive car' 'get ticket' 0.3)
(-> 'drive car' 'buy car' -0.6)
(-> 'get ticket' 'own car' -0.8)""")
if result != 0:
print "Error adding edges to my_graph."
sys.exit(-1)
print "setting up expert test data"
my_default_knowledge = lifenet.data_new(4)
my_inference = lifenet.data_new(4)
my_knowledge = lifenet.data_new(4)
# infers the default knowledge using my_knowledge, which contains nothing right now.
lifenet.infer(my_graph, my_knowledge, my_default_knowledge)
# slice-num format : 0, 1, 2, ...
# node-value-pair format: (node-tag truth-state)
# slice format : (slice-num node-value-pair node-value-pair ...)
result = lifenet.data_add_slices(my_knowledge, my_graph, """
(0 ('have money' 1) ('own car' 0) ('crash car' 0) ('drive car' 0) ('get ticket' 0))
(1 ('have money' 0))
(2 ('have money' 0))
(3 ('crash car' 1) ('have money' 0))""")
if result != 0:
print "Error adding data to my_knowledge."
sys.exit(-1)
# infer the full state using my_knowledge, which now contains the data added above.
lifenet.infer(my_graph, my_knowledge, my_inference)
print lifenet.data_get_top_n_contents(my_inference, 3)
# delete data (not necessary if server is shut down periodically)
lifenet.data_delete(my_default_knowledge)
lifenet.data_delete(my_inference)
lifenet.data_delete(my_knowledge)
Output from the expert model client script should be
similar to the following:
setting up expert graph
setting up expert test data
[['buy car'], ['crash car', 'get ticket', 'buy car'], ['get ticket', 'buy car', 'drive car'], ['buy car', 'own car', 'drive car'], [[0.021093750000000001], [0.10125000000000001, 0.078750000000000001, 0.0028125000000000003], [0.078140835924363175, 0.0028614775930847001, 0.0], [0.022325857011453575, 0.0087071276595022656, 0.0]]]
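The output above contains raw inferred probabilities. To
see how the known data shifts the expert model away from
its default state, the same subtraction step used in
test_commonsense.py could be applied before printing; this
is a sketch, and it would change the numbers shown above:
# subtract the default state so the top contents show differences, not raw probabilities
lifenet.data_subtract(my_inference, my_default_knowledge)
print lifenet.data_get_top_n_contents(my_inference, 3)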