Semantic Web / Web 3.0

In my vision, Web 3.0 is a web of information rather than a web of documents. Currently, search engines and content management systems are rather document/article oriented. I believe that the shift from the current web to a more semantic web will not arrive as a revolution but will be a slow migration (which has already started). The process of transforming data into information is a particular form of data migration.

Business Intelligence / Enterprise 3.0

The goal of so-called “Business Intelligence” is to transform data into usable strategic business information. I strongly believe that this domain sits at the crossroads of the technologies used for data integration on one side (reformatting data into a standardized form) and the semantic web on the other (applying intelligent reasoning to such data).

Architecture and design of integrated systems / SOA / EDA


At Nirva Systems my main role is IT Applications Architect. I actively participate in the development of new components for the Nirva application platform; in particular, I am currently working on an event-based message management component. I have also been heavily involved in the design and architecture of the applications developed by Nirva: I am the product manager of Nirva’s Track and Trace application, and I am also one of the main pre-sales resources.
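The core of the event-based idea behind EDA can be sketched as a minimal publish/subscribe bus. This is purely illustrative (topic names and the class are my own toy example, not the design of Nirva’s message management component):

```python
from collections import defaultdict

class EventBus:
    """A minimal in-process publish/subscribe bus."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        # Register a callable to be invoked for each event on `topic`.
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event payload to every subscriber of the topic.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
received = []
bus.subscribe("order.created", received.append)
bus.publish("order.created", {"id": 42})
print(received)  # [{'id': 42}]
```

The point of the pattern is decoupling: publishers need not know who reacts to an event, which is what makes event-driven architectures easy to extend.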

Data transformation and integration / MDM


I have very strong expertise in XML-related technologies such as XPath and XSLT, and I have given many courses on these topics. In my position at Nirva Systems, we make extensive use of XSLT and all its possibilities (e.g. template modes, the xsl:key construct, etc.).
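As a minimal illustration of the kind of node selection XPath provides (here via the subset supported by Python’s standard library, since running XSLT itself requires an external processor; the catalogue data is invented for the example):

```python
import xml.etree.ElementTree as ET

# A toy catalogue document (illustrative data only).
doc = ET.fromstring(
    "<catalogue>"
    "<book lang='en'><title>Dubliners</title></book>"
    "<book lang='fr'><title>Germinal</title></book>"
    "</catalogue>"
)

# ElementTree supports a useful subset of XPath,
# including attribute predicates:
french_titles = [t.text for t in doc.findall(".//book[@lang='fr']/title")]
print(french_titles)  # ['Germinal']
```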

I understand both the theory and practice of mapping between database schemas. In particular, at the University of Rome, I worked on using machine learning to build mappings between a database and an ontology.
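The idea of applying a schema mapping can be sketched with a toy example: each field of the target schema is defined as a function of the source record. The field names and data here are hypothetical, chosen only to illustrate the mechanism:

```python
# Target field -> function of the source record. (Hypothetical schemas.)
mapping = {
    "full_name": lambda row: row["first_name"] + " " + row["last_name"],
    "email": lambda row: row["mail"],
}

def translate(row, mapping):
    """Apply a schema mapping to one source record."""
    return {target: build(row) for target, build in mapping.items()}

src = {"first_name": "Ada", "last_name": "Lovelace", "mail": "ada@example.org"}
print(translate(src, mapping))
# {'full_name': 'Ada Lovelace', 'email': 'ada@example.org'}
```

In the machine-learning setting, the interesting problem is discovering such mappings automatically rather than writing them by hand.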

Machine learning data extraction patterns


I have worked on generalization techniques to learn patterns for information extraction. In the context of my thesis, the learned patterns took the form of simple regular expressions. Later on, I developed a dynamic-programming-based algorithm to generalize XPath-type queries.
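The flavour of such generalization can be sketched with a toy version: align two example strings (Python’s difflib.SequenceMatcher provides the alignment) and replace the differing regions with a wildcard. This is only a sketch of the idea, not the actual algorithm from the thesis, and the example strings are invented:

```python
import difflib
import re

def generalize(a: str, b: str) -> str:
    """Generalize two example strings into a regular expression by
    keeping the aligned common segments literally and replacing the
    differing regions with a lazy wildcard."""
    matcher = difflib.SequenceMatcher(a=a, b=b, autojunk=False)
    parts, prev_a, prev_b = [], 0, 0
    for a_start, b_start, size in matcher.get_matching_blocks():
        if a_start > prev_a or b_start > prev_b:
            parts.append(".*?")  # differing region -> wildcard
        parts.append(re.escape(a[a_start:a_start + size]))
        prev_a, prev_b = a_start + size, b_start + size
    return "".join(parts)

pattern = generalize("Name: Alice, Age: 30", "Name: Bob, Age: 25")
print(pattern)  # e.g. 'Name: .*?, Age: .*?'
print(bool(re.fullmatch(pattern, "Name: Carol, Age: 41")))  # True
```

The learned pattern matches both training examples and, more importantly, unseen strings with the same structure.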

Automating data processing

One of the objectives of my PhD thesis was to search for solutions to automate access to online web sites.

In particular, I developed WebSource, a tool that “runs” abstract descriptions of a web site extraction process.
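The idea of “running” a declarative description can be sketched as follows; the description format, field names, and page are hypothetical illustrations, not WebSource’s actual description language:

```python
import re

# A declarative description of an extraction process: each target
# field is paired with the pattern that extracts it from a page.
# (Hypothetical format, for illustration only.)
description = {
    "title": r"<title>(.*?)</title>",
    "links": r'href="(.*?)"',
}

def run(description, page_source):
    """'Run' the abstract description against a page's source."""
    return {name: re.findall(pattern, page_source)
            for name, pattern in description.items()}

page = '<html><title>Example</title><a href="/a">A</a><a href="/b">B</a></html>'
print(run(description, page))
# {'title': ['Example'], 'links': ['/a', '/b']}
```

Separating the description from the engine that interprets it is what makes such a tool adaptable to new sites without changing code.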


  • I am a fan of the Debian GNU/Linux system, especially its packaging system
  • I have migrated my desktop computers to Ubuntu and am very satisfied (as a user) with the progress made in the Linux user experience
  • I have used Linux for many years (in particular the current web server is running under Linux)
