AITAmI Challenge - Competition
The workshop series on Artificial Intelligence Techniques for Ambient Intelligence (AITAmI) provides the opportunity to understand the latest developments and to take action in shaping the future of the Intelligent Environments (IE) area, by gathering researchers from a variety of AI subfields together with representatives of commercial interests to explore the technology and applications of ambient intelligence. Among the goals of the workshop, the definition of benchmarks for data, methods and competitions in the IE research area constitutes a basic step towards establishing “gold standards” for years to come. To further this goal, the AITAmI workshop proposes the first Intelligent Environments Challenge, encouraging advances in the realization of Intelligent Environments by seeking suitable solutions in specific real-world application areas such as homes, offices, classrooms and public spaces.
THE COMPETITION
The first edition of the competition will propose two benchmark environments: a Smart Home and a Smart Classroom. In this regard, the AITAmI Challenge Steering Committee (in collaboration with the Intelligent Environments local organization) will set up two concrete reference test environments.
The challenge will take place in these reference test environments: a set of scenarios will be proposed in which users are asked to perform tasks, and each participating IE system is expected to support the users in accomplishing them.
The competition will be revised over the years to become more advanced and to function as an overall quality measurement in the targeted areas. Assessment will be based on a score derived from evaluations by users and by referees selected from the competition Steering Committee.
THE EVALUATION
Each participating system will be assessed along several dimensions. A system will be evaluated on its ability to use suitable AI techniques to implement intelligent behaviors, and on how successfully it accomplishes the proposed tasks. To this end, a mixed evaluation mechanism combining quantitative and qualitative features will be set up.
During the first edition of the challenge, each system will be assessed according to its ability to be:
a) intelligent, recognizing situations where the system can help a user. The system should monitor both user behavior and the state of the environment in order to detect situations in which it can provide suitable support, and then determine whether its help might be needed.
b) sensible, recognizing when the system is allowed to offer help. When an opportunity arises, the system should decide whether and how to intervene to help/support the user, and reason about when and how to interact with her.
c) able to deliver help according to the needs and preferences of those it is helping. The system should support/facilitate users with appropriate actions, acting effectively to provide support.
A referee (a member of the Steering Committee) will observe the behavior of each system during the contest and assign it a score. After their interaction experience, users will also score each system. The system with the best overall score will be declared the winner of the competition. The results will be communicated during the AITAmI workshop; only the winner will be announced publicly, while all other participants will receive private feedback on their assessment.
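Purely as an illustration of how referee and user scores might be composed into an overall score (the actual criteria, scales and weights will be defined by the organizers), the following Python sketch shows one possible composition; all criterion names, weights and values in it are hypothetical.

    # Hypothetical sketch: composing referee and user evaluation reports
    # into a single score per system. The criteria (loosely following (a)-(c)
    # above), the weights and the 0-10 scale are illustrative assumptions;
    # the official evaluation process will be published separately.

    CRITERIA = ("intelligent", "sensible", "helpful")

    def overall_score(referee_report, user_report, referee_weight=0.5):
        """Weighted average of the per-criterion scores from both reports."""
        user_weight = 1.0 - referee_weight
        per_criterion = [
            referee_weight * referee_report[c] + user_weight * user_report[c]
            for c in CRITERIA
        ]
        return sum(per_criterion) / len(per_criterion)

    if __name__ == "__main__":
        referee = {"intelligent": 8, "sensible": 7, "helpful": 9}  # assumed 0-10 scale
        user = {"intelligent": 9, "sensible": 8, "helpful": 8}
        print(round(overall_score(referee, user), 2))  # prints 8.17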
A more detailed description of the evaluation process will be provided soon.
THE SMART HOME SCENARIOS
A Smart Home benchmarking environment will be set up by means of an easy-to-install tool that provides smart home capabilities: the CASAS architecture. More technical details may be found in the following references:
- Cook, D.J.; Crandall, A.S.; Thomas, B.L.; Krishnan, N.C., "CASAS: A Smart Home in a Box", Computer, vol. 46, no. 7, pp. 62-69, July 2013.
(Open access link: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3886862/)
- Crandall, A.S.; Cook, D.J., "Smart Home in a Box: A Large Scale Smart Home Deployment", in Workshop Proceedings of the 8th International Conference on Intelligent Environments (AITAmI 2012).
(Open access link: http://www.eecs.wsu.edu/~cook/pubs/ie12.3.pdf)
In each scenario, a user will be asked to perform a set of tasks, a participating system will be expected to support/help/facilitate the user, and a referee will observe the behavior of the system. After each trial, both the user and the referee will evaluate the system. The overall evaluation will be the composition of the two reports: the user's report, based on her experience, and the referee's report, based on the system's features and observed behavior. The system with the best evaluation is the winner of the competition.
A set of possible reference scenarios for the smart home benchmark environment is provided here. The actual competition scenarios will be described in more detail after participant selection.
Scenario 1: Managing personal agenda
An IE system is supposed to monitor the user's TV-watching habits.
The user receives a message from a friend with an invitation to a piano concert at the city theatre the following week. Two options are available: Sunday at 5pm or Wednesday at 9pm.
The user interacts with the system to schedule the event on Sunday.
Checking the user's habits, the system detects that the user usually watches TV on Sunday afternoons at 5pm.
The system may react as follows:
- Ask the user whether there is a preferred TV show on Sunday.
- Inform the user of a TV show she usually watches and ask whether she would like it recorded.
- Inform the user of a TV show she usually watches and propose scheduling the piano concert on Wednesday instead.
The system is supposed to:
- Be aware of the user's TV-watching habits.
- Manage the scheduling of events in the user's personal agenda.
- Be able to reason and propose alternatives.
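Only as an illustrative sketch (not a required design), the conflict-detection step of this scenario could be expressed as a check of each proposed slot against learned TV-watching habits. The habit table, function names and dates below are hypothetical.

    # Hypothetical sketch of Scenario 1: checking a proposed concert slot
    # against learned TV-watching habits and suggesting an alternative.
    from datetime import datetime

    # Assumed habit model: (weekday, hour) slots when the user usually watches TV.
    TV_HABITS = {("Sunday", 17): "weekly movie show"}   # illustrative data

    def check_slot(option):
        """Return the conflicting habit for a proposed slot, if any."""
        return TV_HABITS.get((option.strftime("%A"), option.hour))

    def advise(options):
        """Suggest the first option that does not clash with a TV habit."""
        for option in options:
            conflict = check_slot(option)
            if conflict is None:
                return f"Schedule the concert on {option:%A at %H:%M}."
            print(f"Note: you usually watch '{conflict}' on {option:%A at %H:%M}.")
        return "Both options conflict with your TV habits; record the show?"

    if __name__ == "__main__":
        sunday = datetime(2014, 4, 6, 17, 0)      # Sunday 5pm
        wednesday = datetime(2014, 4, 9, 21, 0)   # Wednesday 9pm
        print(advise([sunday, wednesday]))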
Scenario 2: Detection of risky situations
An IE system is supposed to monitor the status of the house in order to detect risky situations.
The user is cooking (the stove is on) and leaves the kitchen. Having forgotten about the stove, she opens the main door to leave the house.
Checking suitable sensors deployed in the house, the system detects that the user is leaving the apartment while the stove is still on.
The system may react as follows:
- Interact with the user to find out whether she really intends to leave the home.
- Inform the user about the possible risk of leaving the stove on.
- Inform the user about the possible risk of leaving the stove on and switch off the stove.
The system is supposed to:
- Monitor the status of the house and its appliances.
- Detect risky situations.
- Be able to identify/remove a cause of risk.
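A minimal illustrative sketch of the detection rule in this scenario follows; the sensor names and event format are invented for illustration and are not tied to the CASAS interface.

    # Hypothetical sketch of Scenario 2: a simple rule that fires when the
    # main door opens while the stove is still on. Sensor names are invented
    # for illustration and do not correspond to any specific CASAS sensor IDs.

    house_state = {"stove": "OFF", "main_door": "CLOSED"}

    def on_sensor_event(sensor, value):
        """Update the house state and check the risky-situation rule."""
        house_state[sensor] = value
        if house_state["main_door"] == "OPEN" and house_state["stove"] == "ON":
            warn_user("The stove is still on. Do you want me to switch it off?")

    def warn_user(message):
        # Stand-in for the actual interaction channel (speech, app, display, ...).
        print("WARNING:", message)

    if __name__ == "__main__":
        on_sensor_event("stove", "ON")        # user starts cooking
        on_sensor_event("main_door", "OPEN")  # user opens the door to leave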
Scenario 3: Light control
An IE system is supposed to monitor the presence of the user, her status and her activities in the house, and to adapt the house conditions accordingly.
The user is reading a book in an armchair. She is trying to relax and would like suitable illumination.
Checking suitable sensors deployed in the house, the system recognizes the user's status and activity.
The system may react as follows:
- Adjust the illumination to the level the user usually sets while reading.
- Interact with the user to ask whether she would like a particular illumination.
- Wait for some user request.
The system is supposed to:
- Recognize user activity and status.
- Be aware of the user's illumination preferences while reading.
- [Be able to detect the user's emotional state.]
- Decide how and when to interact with the user.
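Again purely as an illustration, the reaction step could look up a stored illumination preference for the recognized activity; the activity labels and preference values below are assumptions.

    # Hypothetical sketch of Scenario 3: adjusting the lights to the level the
    # user usually prefers for the recognized activity. The activity labels and
    # preference values are illustrative assumptions.

    LIGHT_PREFERENCES = {"reading": 60, "watching_tv": 20, "relaxing": 40}  # percent

    def adapt_lighting(activity, current_level):
        """Return the preferred light level for the activity, or keep the current one."""
        return LIGHT_PREFERENCES.get(activity, current_level)

    if __name__ == "__main__":
        # "reading" would come from the (assumed) activity-recognition module.
        print("Setting lights to", adapt_lighting("reading", current_level=80), "percent")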
Scenario 4: Find objects
An IE system is supposed to monitor the presence of objects in the house.
The user is looking for her car keys. The keys are not in their usual place.
Checking suitable sensors deployed in the house, the system recognizes the user's status and activity. The system is also assumed to be able to locate the keys.
The system may react as follows:
- Provide the user with a suggestion about the position of the keys.
- Ask whether the user is looking for something.
- Wait for some user request.
The system is supposed to:
- Recognize user activity and status.
- Locate some relevant objects (e.g., the keys) in the house.
- Decide how and when to interact with the user.
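A minimal sketch of the object-location step, assuming the environment tracks the last known position of tagged objects (object tags and room names are invented for illustration):

    # Hypothetical sketch of Scenario 4: answering "where are my keys?"
    # from the last known positions of tagged objects. Object tags and
    # room names are illustrative assumptions.

    last_seen = {}   # object tag -> room where it was last detected

    def object_detected(tag, room):
        """Called whenever a tagged object is sensed in a room."""
        last_seen[tag] = room

    def locate(tag):
        """Suggest where to look for a tagged object."""
        room = last_seen.get(tag)
        if room is None:
            return f"I have no recent reading for the {tag}."
        return f"The {tag} were last seen in the {room}."

    if __name__ == "__main__":
        object_detected("car keys", "hallway")
        object_detected("car keys", "kitchen")   # most recent reading wins
        print(locate("car keys"))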
Scenario 5: Scheduling events
An IE system is supposed to schedule tasks planned by the user. The user plans to activate the washing machine at 6pm. After 7pm energy costs are lower.
The system is supposed to detect the request to activate the washing machine, have knowledge of energy costs, and recognize the opportunity to save money by delaying the task until after 7pm.
The system may react as follows:
- Delay the activation of the washing machine and inform the user.
- Interact with the user to ask whether she would prefer to delay the activation.
- Inform the user about the possibility of saving money if the activation is delayed.
The system is supposed to:
- Detect user requests about appliance usage.
- Have knowledge of energy costs and their temporal distribution.
- Schedule activities.
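As an illustrative sketch only, the scheduling decision could compare the requested start time against the cheaper-tariff boundary; the 7pm boundary comes from the scenario, everything else below is assumed.

    # Hypothetical sketch of Scenario 5: delaying an appliance task to a
    # cheaper energy tariff. Only the "cheaper after 7pm" rule comes from
    # the scenario; the rest is illustrative.
    from datetime import time

    CHEAP_TARIFF_START = time(19, 0)   # 7pm, as stated in the scenario

    def suggest_start(requested, cheap_start=CHEAP_TARIFF_START):
        """Return the suggested start time and whether a delay saves money."""
        if requested < cheap_start:
            return cheap_start, True    # delaying until the cheap tariff is cheaper
        return requested, False

    if __name__ == "__main__":
        start, delayed = suggest_start(time(18, 0))   # user asks for 6pm
        if delayed:
            print(f"Delaying the washing machine to {start:%H:%M} to save on energy costs.")
        else:
            print(f"Starting at {start:%H:%M} as requested.")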
HOW TO PARTICIPATE
To participate in the selection process for the AITAmI Challenge event, send an email to the Steering Committee at aitami.challenge@gmail.com. Each email must contain a cover letter and a System Description Paper.
The cover letter should contain the following basic information: (a) IE system name; (b) country, affiliation, contact person name and contact e-mail; (c) link to a system video (if available); (d) link to a system web site (if available).
The System Description Paper must contain the following information: a general introduction to the presented system; the innovative technology and scientific contribution; the focus of research/research interests; the re-usability of the system for other research groups; the applicability of the system in the real world; and a description of the hardware and software, including a list of integrated externally available components (commercial products, freeware, open source, etc.).
The length of the description paper is limited to 8 pages. Please use the same IOS Press format used for AITAmI submissions (http://www.iospress.nl/service/authors/latex-and-word-tools-for-book-authors/). System Description Papers accepted to run in the competition will be included in the proceedings of the AITAmI workshop. A Thematic Issue with selected extended papers will be organized in the Journal of Ambient Intelligence and Smart Environments (JAISE - http://www.jaise-journal.org/).
IMPORTANT DATES
Application deadline: 4th of April 2014
Notification of acceptance: t.b.a.
Challenge event: 30th of June 2014
STEERING COMMITTEE
- Juan C. Augusto (Middlesex University, UK)
- Asier Aztiria (University of Mondragon, Spain)
- Tibor Bosse (VU University Amsterdam, Netherlands)
- Diane J. Cook (Washington State University, USA)
- Matjaz Gams (Jožef Stefan Institute, Slovenia)
- Andrea Orlandini (CNR - National Research Council, Italy)
- Carlos Ramos (Institute of Engineering of Porto, Portugal)
- Liping Shen (Shanghai Jiaotong University, China)
- Gang Chen (Shanghai Jiaotong University, China)