Keynotes

Visual Mining: Interpreting Image and Video
Stefan Rüger
Professor, Knowledge Media Institute, The Open University, UK

Like text mining, visual media mining tries to make sense of the world through algorithms - albeit by analysing pixels instead of words. This talk highlights recent important technical advances in automated media understanding, which has a variety of applications ranging from machines emulating human aesthetic judgment of photographs to typical visual mining tasks such as analysing food images. Highlighted techniques include near-duplicate detection, multimedia indexing and the role of machine learning. While the first two enable visual search engines - so that, e.g., a smart-phone snapshot alone can link the real world to databases with information about it - machine learning is ultimately the key to endowing machines with human capabilities of recognition and interpretation. The talk will end by looking into the crystal ball, exploring what machines might learn from automatically analysing tens of thousands of hours of TV footage.

Speaker Bio
Prof Stefan Rüger read Physics at Freie Universität Berlin and gained his PhD in Computer Science at Technische Universität Berlin (1996) on the theory of neural networks. During the next decade he carved out his academic career from postdoc to Reader, researching multimedia information retrieval at Imperial College London, University of London, where he also held an EPSRC Advanced Research Fellowship (1999-2004). In 2006 Stefan became a Professor of Knowledge Media when he joined The Open University's Knowledge Media Institute to cover the area of multimedia and information systems. He currently holds an Honorary Professorship from the University of Waikato, New Zealand, and has held Visiting Fellowships at Imperial College London and Cranfield University, UK. Stefan is interested in the intellectual challenge of visual processing with a view to automated multimedia understanding.
Answering Structured Queries and Natural Language Questions on RDF Knowledge Bases and Their History
Carlo Zaniolo
Professor, UCLA, USA

There is now a cornucopia of rich knowledge bases, including those of encyclopedic scope, such as DBpedia, harvested from Wikipedia, and those focusing on specialized knowledge domains, such as the medical information domain. However, novel user-friendly systems and interfaces that can answer natural language questions and complex structured queries on knowledge bases and their history are now needed to realize the full potential of this cornucopia. In this presentation, I will first describe in some depth the design of our Canali and SWiPE systems, developed to answer these needs. Then, I will turn to the difficult problem of managing KB histories that document both changes in the external world they describe and internal KB revisions and corrections. To address this problem successfully, semantic web applications will require a more flexible framework for temporal reasoning than that proposed by the recent temporal SQL standards.

Speaker Bio
Carlo Zaniolo was born in Vicenza, Italy. He received a degree in Electrical Engineering from Padua University in 1968, and M.S. and Ph.D. degrees in Computer Science from UCLA in 1970 and 1976, respectively. After working at Bell Laboratories, Murray Hill, NJ, and MCC in Austin, Texas, Dr. Zaniolo joined the UCLA CS Department in 1991, and was awarded the N.E. Friedmann Chair in Knowledge Science. Dr. Zaniolo's interests include database systems, knowledge base systems, non-monotonic reasoning, spatio-temporal reasoning, internet information systems, and data mining.
Semantic Computing for Analytics: Lessons from the Automotive Industry
Nestor Rychtyckyj
Technical Expert, Ford Motor Company, USA

One of the huge oncoming challenges for information technology is the critical need for analytics that can make sense of big data. The diversity, heterogeneity and sheer volume of data, along with inherent errors and gaps, require new solutions that utilize intelligence and context to help solve this problem. Semantic computing and its related technologies are one such solution. In this talk I will discuss how the application of semantic technologies is making a significant difference in how analytics can be used to make sense of big data. Technologies such as natural language processing, ontologies, text analytics, machine translation and machine learning are being utilized to help understand data and use that knowledge to solve real-world problems. The automotive industry has a wealth of data that cannot easily be understood and analyzed without adding intelligence and context to the process. I will discuss how semantic computing is currently being used in a variety of automotive domains, including manufacturing, vehicle assembly and quality. More importantly, we will look at the current trends in semantic computing and try to see where they will lead in the future.

Speaker Bio
Nestor Rychtyckyj is a technical specialist in Artificial Intelligence at Ford Motor Company in Dearborn, Michigan. His primary role focuses on the application of AI-based systems for vehicle assembly process planning, quality, ergonomics for the plant floor, machine translation and other automotive applications. He is responsible for the development and deployment of semantic technologies at Ford and has developed domain ontologies and applications that demonstrate the advantages of this approach. He has lectured about the use of semantic computing at internal forums such as the Ford Analytics Conference and at external events such as the IEEE Southeast Michigan conference. Nestor received his Ph.D. in computer science from Wayne State University in Detroit, Michigan. Dr. Rychtyckyj is also an adjunct professor at Lawrence Technological University, a senior member of both AAAI (Association for the Advancement of Artificial Intelligence) and IEEE, and a member of ACM (Association for Computing Machinery). He chaired the Innovative Applications of Artificial Intelligence (IAAI) conference in 2010 and was co-chair for Cybernetics at the IEEE Systems, Man & Cybernetics conference in 2011. He has published more than 30 papers and has presented his work at various conferences including IAAI and AMTA (Association for Machine Translation in the Americas).
Standards in IoT and the Role of Semantics
Atsushi Kitazawa
General Manager, 4th Platform Software, NEC Solution Innovators, Japan
Chief Engineer, Cloud Platform Division, NEC Corporation, Japan

While the number of interconnected things is constantly growing, many new opportunities to take advantage of the information generated by those things are emerging, starting a whole new era of the information age. The main question these days is how to best make use of this abundance of information. How can we extract meaning from the generated data, and how can we make sense of the data for humans or automate decisions [1]? This is where semantics comes into the picture of the IoT standardization landscape. In this talk, we start with an overview of IoT, from the history of old-age M2M to a review of the latest proliferation of connected things. We then report on our experience in one of the smart city projects in Europe from a commercial point of view, and we give an overview of the IoT standards landscape, which is currently highly fragmented and heterogeneous. Despite this heterogeneity, we see a commonality, especially when looking at it from the "semantic" point of view, where semantics is defined as "meaning", "context" and "intention" [2]. We will explain how the OMA NGSI Context Management (OMA/CM) standard makes use of contextualized data, focusing on three use case scenarios: (1) open data in a smart city project, where meaning is added to measured values in terms of metadata; (2) a smart display that actively responds to the current context; and (3) a dynamic camera device discovery mechanism based on a context query given by a police officer. We conclude by demonstrating the feasibility of semantics in IoT standards, showing an interworking showcase between OMA and oneM2M where an ontology plays an important role by matching elements of metadata derived from each standard.
[1] Gartner, "Top 10 Technology Trends Impacting Information Infrastructure," 2013.
[2] http://www.scconsortium.org/
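Use case (1) above hinges on metadata giving meaning to raw measured values. The following Python sketch illustrates that idea with an NGSI-style context element; the entity, attribute and metadata names are purely illustrative simplifications, not the exact OMA NGSI schema:

```python
# A context element in the spirit of OMA NGSI: an entity carries attributes,
# and each attribute carries metadata (here: unit and timestamp) that turn a
# bare number into an interpretable measurement. Names are illustrative only.

context_element = {
    "entityId": {"id": "WeatherStation:Downtown:001", "type": "WeatherStation"},
    "attributes": [
        {
            "name": "temperature",
            "type": "float",
            "value": "17.2",
            "metadata": [
                {"name": "unit", "type": "string", "value": "celsius"},
                {"name": "timestamp", "type": "ISO8601",
                 "value": "2015-06-01T12:00:00Z"},
            ],
        }
    ],
}

def describe(element):
    """Render each attribute together with its metadata, so that a consumer
    sees '17.2 celsius at a given time' rather than an opaque '17.2'."""
    eid = element["entityId"]["id"]
    lines = []
    for attr in element["attributes"]:
        meta = {m["name"]: m["value"] for m in attr["metadata"]}
        lines.append(
            f"{eid}: {attr['name']} = {attr['value']} {meta.get('unit', '')}"
            f" (measured at {meta.get('timestamp', 'unknown time')})"
        )
    return lines

for line in describe(context_element):
    print(line)
```

Without the metadata, "17.2" is just a number; with it, any consumer - a dashboard, a smart display, or another standard's gateway - can interpret the reading, which is exactly the kind of element-by-element matching the OMA/oneM2M interworking showcase relies on.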
Speaker Bio
Mr. Atsushi Kitazawa has been working in Information Management (IM) for over 30 years, primarily developing database management software at NEC Corporation. Meanwhile, he has been working for the telecom industry, implementing standard protocols for core networks as well as service delivery networks. He now leads a division in charge of big data at NEC Solution Innovators and is extending its reach to IoT. He leads IM technology in NEC as a Chief Engineer of IM Products.