Monday, April 27, 2015

Final Project: AirQuality


My final project is done! Overall, I am very proud of what I have created. The original intent of this project was to demonstrate real-time graphing of data being sent from remote sensors to a central server. I wanted the project to be scalable so that multiple sources of information could feed the same central server, and so the user could choose what information they wanted to display. The system I have in place allows users to select graphs based on individual sensors or grouped "regions" of sensors, and to view data as it becomes available in real time.

To me, this is a clear situation where a RESTful interface should be used for data collection. Since there were multiple participants and no persistent connection needed to be maintained (each data reading is a single POST), REST made the most sense. I broke the overall system up into two parts: the "central" instance, which I developed using Python / Flask and some RESTful helper libraries, and the "sender" service, which ended up being written in C. I also wrote a simulator in Python using the Requests library that produces data similar to the sender service, both to test my RESTful web service and to simulate multiple sensor inputs without needing to purchase additional hardware.
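To give a rough idea of what the simulator does, here is a minimal sketch. The endpoint path and field names are illustrative placeholders, not necessarily the exact ones in my repository:

```python
import random
import time

import requests

# Illustrative endpoint and payload shape -- the real names live in the GitHub repo.
CENTRAL_URL = "http://localhost:5000/api/readings"
SENSOR_ID = "sim-001"

while True:
    reading = {
        "sensor_id": SENSOR_ID,
        "timestamp": int(time.time()),
        # Simulated concentration, roughly matching the uncalibrated MQ-2 values I observed.
        "value": random.uniform(80, 120),
    }
    try:
        # Each reading is a single, stateless POST -- no persistent connection needed.
        resp = requests.post(CENTRAL_URL, json=reading, timeout=5)
        resp.raise_for_status()
    except requests.RequestException as exc:
        print("failed to send reading: %s" % exc)
    time.sleep(10)
```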

The central instance was broken up into three parts: the RESTful endpoints that accept new data and return requested data as JSON objects, the "Admin" page that lets administrative users control which senders are allowed to submit readings, and the graphing interface that lets non-privileged users select data to be graphed and see the resulting graphs. In development, the admin page does not prompt for a username or password. In a production system I would configure the web server to require logins for that URL path, keeping the responsibility of authenticating users outside of the web application itself. If I wanted users to "own" groups of sensor nodes, or to set up a multi-tenant instance (SaaS), then I would build an authentication stack into the code itself. All of this design was largely complete before I selected the sensor for the gas I wanted to monitor because, in essence, all sensors that measure a gas as a concentration in a volume of air work on the same principle. Once I had the basic functionality working, I dug into Twitter Bootstrap a bit to make the interface look nicer.
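For context, the RESTful side of the central instance boils down to something like the following sketch. The routes and field names here are simplified placeholders; the real code also checks the admin whitelist and stores readings in MongoDB rather than a list:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
readings = []  # stand-in for the MongoDB collection used in the real project


@app.route("/api/readings", methods=["POST"])
def accept_reading():
    # Each sender POSTs a single JSON reading; nothing is kept open between requests.
    data = request.get_json(force=True)
    readings.append(data)
    return jsonify({"status": "ok"}), 201


@app.route("/api/readings/<sensor_id>", methods=["GET"])
def get_readings(sensor_id):
    # The graphing page polls an endpoint like this to pull new points for a sensor.
    matching = [r for r in readings if r.get("sensor_id") == sensor_id]
    return jsonify({"readings": matching})


if __name__ == "__main__":
    app.run(debug=True)
```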

After originally looking at the prices for CO2 sensors, I decided to look for a (much cheaper) CO sensor instead. The sensor I ended up settling on was the Hanwei MQ-2, which came in a larger kit of sensors by Sunfounder Labs. I received some bad information when initially researching this sensor, and it turns out the MQ-2 is not a carbon monoxide detector as I had originally thought. Instead, it is a combustible gas detector, which makes it easier to demonstrate (all you need is a lighter) but doesn't quite measure what I was hoping to observe. The general principle is the same, however, so with the right sensor the only piece of the code that would need to be rewritten is the sender agent on the Raspberry Pi. A wiring diagram showing how to connect the MQ-2 to the Raspberry Pi through the provided ADC (analog-to-digital converter) is in this post, as well as in the code on GitHub. This was difficult to figure out, given my limited previous experience translating wiring diagrams into completed projects.



Once I did have it wired, however, I was happy that the demo code provided by Sunfounder worked as expected. This was soon followed by frustration when my translation of that code into Python, using the GPIO library and Requests (my original intent), did not behave the same as the C version. I suspect that either some behavior the GPIO library tries to protect against (and the WiringPi C library does not) was preventing results, or Python was sampling the data too slowly to get numbers. After some experimenting with running the C example in a subprocess and using Python to send the result, I ended up learning a bit of C (namely, libcurl and libconfig) to expand the sample code into a complete sender service.
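For reference, the intermediate subprocess approach looked roughly like this. It is a sketch, assuming a compiled C sample (here called mq2_sample) that prints a reading to stdout; the binary name, URL, and field names are illustrative:

```python
import subprocess
import time

import requests

CENTRAL_URL = "http://localhost:5000/api/readings"  # illustrative endpoint

# Run the compiled C sample (built against WiringPi) and capture one reading
# from its output. "./mq2_sample" is a placeholder name for that binary.
output = subprocess.check_output(["./mq2_sample"]).decode("utf-8")
value = float(output.strip().split()[-1])  # assumes the last token is the reading

requests.post(CENTRAL_URL, json={
    "sensor_id": "mq2-pi",
    "timestamp": int(time.time()),
    "value": value,
}, timeout=5)
```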

As it stands right now, the system is working as intended. After setting up the Python / MongoDB environment for the central instance, you can start one or more simulator instances and/or wired MQ-2 sensors, all pointing at the same central instance, and graph any part of that. To accept new sensors into the system and assign them names and region groupings, click the button for the Admin Page. Once graphing, to change the values on the MQ-2 sensor graph, simply hold a lit lighter up to the sensor for a few seconds and observe the values on the graph increase. The values of the simulated sensor nodes rise and fall randomly between 80 and 120 (ppm/v) on their own. Those values are based on the data observed when testing the actual MQ-2 sensor without zeroing it out first. In production, I would let the sensor run for a few minutes and determine a "zero" before sending data. That makes for much less interesting graphs, so I decided to leave it uncalibrated during the demo for effect.
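If I were to add that zeroing step, it would look something like this sketch, where read_sensor is a placeholder for whatever function returns the raw ADC value:

```python
import time


def calibrate_zero(read_sensor, duration_s=180, interval_s=1):
    """Average raw readings for a few minutes to establish a baseline.

    `read_sensor` is a placeholder for the function that returns the
    current raw value from the ADC.
    """
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        samples.append(read_sensor())
        time.sleep(interval_s)
    return sum(samples) / float(len(samples))


# Later, each reading would be reported relative to the baseline:
# adjusted = max(read_sensor() - baseline, 0)
```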

There are still a few things I would do differently or add on given more time. Waiting until near the end to select the sensor worked out, but resulted in measuring an undesired metric. With more money and time I would have selected better components and gotten more comfortable with them before beginning the project, perhaps trying a few out before settling on one. There are also some new features that would really improve the project overall. The next step in functionality I see for this project would be assigning GPS coordinates to the sensor nodes in the central instance and plotting them on a map. I had considered doing this for the project, but backed away when I realized how much work was involved in learning mapping on top of learning graphing libraries, and graphing was my primary focus. The next version of this project could include color-coded pins on a map, the color indicating a pre-defined threshold of "air quality" based on what concentration of that particular gas is safe for humans. Plotting these nodes, combined with the concept of regions, would let me close off geometric shapes on the map overlay and color-code those as well. This would give the user a point of reference when correlating graph data to real-world locations. I had also briefly considered setting up email alerts for when levels cross certain thresholds for configured periods of time, i.e. "bad air reports". I thought this would be too hard to demonstrate in class, but it would make a good future enhancement.

Links

Monday, April 20, 2015

Environmental IoT: References

1. The Changing Paradigm of Air Pollution Monitoring
Emily G. Snyder, Timothy H. Watkins, Paul A. Solomon, Eben D. Thoma, Ronald W. Williams, Gayle S. W. Hagler, David Shelow, David A. Hindin, Vasu J. Kilaru, and Peter W. Preuss
Environmental Science & Technology 2013 47 (20), 11369-11377
DOI: 10.1021/es4022602. http://pubs.acs.org/doi/abs/10.1021/es4022602 (April 2 2015)

2. GreenBiz 101: What you need to know about the Internet of Things
Martin LaMonica. http://www.greenbiz.com/blog/2014/05/12/greenbiz-101-what-do-you-need-know-about-internet-things. (April 2 2015)

3. Mapping Traffic Pollution Exposure: the Quantified Self
J. J. Huck, J. D. Whyatt, P. Coulton, A. Gradinar. http://eprints.lancs.ac.uk/69279/1/4._Mapping_Traffic_Pollution_Exposure.pdf. (April 2 2015)

4. Ubiquitous Sensor Networking for Development (USN4D): An Application to Pollution Monitoring
Antoine Bagula, Marco Zennaro, Gordon Inggs, Simon Scott and David Gascon
Sensors 2012, 12(1), 391-414
DOI: 10.3390/s120100391. http://www.mdpi.com/1424-8220/12/1/391/htm. (April 2 2015)

5. Air Quality Egg. http://airqualityegg.com/. (April 2 2015)

6. AirPi. http://airpi.es/index.php. (April 2 2015)

7. AirCasting. http://aircasting.org/. (April 2 2015)

8. CitiSense. http://www.gizmag.com/citisense-air-quality-monitor/25512/. (April 2 2015)

9. HabitatMap. http://www.habitatmap.org/. (April 2 2015)

10. Sensing Earthquakes and Forest Fires
Anne Field. http://newsroom.cisco.com/press-release-content?type=webcontent&articleId=1489600. (April 2 2015)

11. Forest Watch system with Axis cameras helps protect forests against fire. http://www.axis.com/success_stories/viewstory.php?case_id=3722. (April 2 2015)

12. IoT Forest Environmental Factors Collection Platform Based On ZigBee
Zhang Yu, Liu Xugang, Geng Xue, Li Dan. Cybernetics and Information Technologies. Volume 14, Issue 5, Pages 51–62, ISSN (Online) 1314-4081, DOI: 10.2478/cait-2014-0043, December 2014. http://www.degruyter.com/view/j/cait.2014.14.issue-5/cait-2014-0043/cait-2014-0043.xml. (April 2 2015)

13. Zenbotica FireFly. http://www.zenbotica.com/firefly.html. (April 2 2015)

14. Transforming Earthquake Detection and Science Through Citizen Seismology.
Jason C. Young, David J. Wald, Paul S. Earle, Lea A. Shanley. Washington, DC:
Woodrow Wilson International Center for Scholars, 2013. http://www.wilsoncenter.org/sites/default/files/CitizenSeismology_FINAL.pdf. (April 2 2015)

15. Tweets give USGS early warning on earthquakes. Frank Konkel. http://fcw.com/articles/2013/02/06/twitter-earthquake.aspx. (April 2 2015)

16. How Japan's Earthquake and Tsunami Warning Systems Work. Will Knight. http://www.technologyreview.com/view/423279/how-japans-earthquake-and-tsunami-warning-systems-work/. (April 2 2015)

17. MAKE: PROJECTS Earthquake Detector. http://makezine.com/projects/earthquake-detector/. (April 2 2015)

18. The Quake-Catcher Network. http://qcn.stanford.edu/. (April 2 2015)

19. The Internet of Things Could Drown Our Environment in Gadgets.
Klint Finley. http://www.wired.com/2014/06/green-iot/. (April 2 2015)

20. Mobile Crowdsensing: Current State and Future Challenges.
Raghu K. Ganti, Fan Ye, and Hui Lei, IBM T. J. Watson Research Center. IEEE Communications Magazine, November 2011. http://csce.uark.edu/~tingxiny/courses/5013sp14/reading/Ganti2011MCC.pdf. (April 2 2015)

Environmental IoT: Challenges and Conclusions

The use of IoT in the environment has a long way to go, but projects are starting to develop and new ideas are being tried out. One interesting article I came across while researching this topic actually looked at IoT and the environment from a slightly different angle: the impact a "smarter environment" may have on the environment itself. Because most IoT solutions are subject to rapid hardware iteration and changing requirements, a lot of electronic waste is generated by developing IoT solutions, even those intended to benefit the environment. Many of the places where small electronics are produced have relatively unknown manufacturing conditions and could be causing harm even while the end product is intended to be used for good. The net effect is that as IoT use in the environment increases, the environment itself could suffer for it. One suggested way of mitigating this is the use of open source code and sensors. If a proprietary application or sensor is abandoned by its creator, it is effectively dead and must be discarded. If the code and sensor are open, maintenance can move to the next interested party and the technology will survive obsolescence better [19].

Another concern that has come out of the increase in IoT projects intended to report on the environment is the trustworthiness of the data. Currently, the public does not have access to the quality of sensors that larger organizations do, because of cost constraints and other mitigating factors. There are no strict standards governing the data, so combining data from multiple places could yield incorrect results. On top of that, aggregating data from different sensors monitoring the same thing could yield incorrect information as well [14]. One suggested approach to this problem is, for now, not to use community-generated data from these IoT systems on its own, but instead to use it to supplement existing data from larger organizations [1]. One step that would go a long way would be for larger organizations to make more of their data available for public consumption. Larger organizations taking very public actions to develop systems for public use would also encourage the development of ever more useful services.

Internet of Things applications, by their very nature, have some technical hurdles as well. They have high bandwidth requirements that grow quickly as more sensors are added to a network. They have high storage costs that must be paid by someone. And with all that data come access problems and privacy implications [20]. These are all issues that need to be worked out in the short term before IoT technologies on the scale of monitoring the environment become more feasible. Yet there are constant advances that make these systems more and more practical. Data storage is getting cheaper and easier to distribute, and technologies like ZigBee (which are designed with power constraints in mind) make leaving a sensor in a remote location for an extended period of time more reasonable. While many of these systems are not yet ready to become part of our everyday life, people are thinking about these problems and coming up with approaches daily. Who knows what it will look like in a few years.

Earthquake Detection

Another early-warning application is earthquake detection. Earthquakes are typically detected by watching a large area for the longer-range, faster-moving "P-wave". This wave may arrive minutes, or more often seconds, before the more damaging "S-wave". This is a particularly difficult problem because the earth is constantly moving, so several measurement points need to agree on seismic activity to prevent false positives. Since the time difference between the waves is never guaranteed, it is possible (and likely) that current systems will not yet have agreed that a seismic event is occurring by the time the S-wave hits. Traditionally, the systems that monitor for these events are run by larger organizations, but out of frustration with the current systems, private sector projects are starting to fill in perceived gaps [14].

One private solution to earthquake monitoring is more of a cloud computing solution than a pure Internet of Things one. It is called TED, or the Twitter Earthquake Detector. The system constantly monitors Twitter for people reporting (presumably from mobile devices) that they felt the ground shake. When enough data is collected to suggest an earthquake is happening or has happened, TED can send out alerts. While the initial data requires human interaction, the responses to TED's data do not. Larger organizations have considered integrating systems like TED into their solutions because of the speed at which it can diagnose earthquakes. While current systems can take several hours to agree on the area affected by an earthquake, TED can often do it in 20 minutes [15].

Japan has had an earthquake detection system for a number of years, relying mostly on data gathered from satellites monitoring ground stations deployed on land or under water. In the last several years, though, the country has been using existing networks to tie alerts generated by this system into many pieces of critical infrastructure. In 2011, Japan suffered an 8.9 magnitude earthquake and subsequent tsunami. The loss of life was high, but could have been much higher if not for the earthquake monitoring system. Within seconds of detecting the event, the system was able to send out automated cell phone alerts, broadcast warnings on TV, and shut down industrial facilities, transportation systems, and elevators. There is no way of telling how many lives were saved by this system [16].

Of course, there are smaller projects mostly focused on hobbyist or DIY "makers". In 2010, Make Magazine posted an article demonstrating how to make a seismograph with digital reporting capabilities using consumer-grade electronics. One of the suggestions in the article was to use the signal sent by the system to drive smart home technologies, like shutting off the gas valve [17]. Stanford University runs a hobbyist earthquake monitoring system called the "Quake-Catcher Network". The website provides a place to obtain a kit, instructions on how to set it up, and a map that shows seismic activity from all over the world [18]. It has been cautioned, however, that the results from these smaller DIY systems do not replace the need for larger, better funded solutions and should be trusted only so far [20], something that will be covered in more detail in the conclusions.

Forest Fire Detection

Early warning systems are a very new area for IoT technology; few are using much in the way of Internet-aware design, but many more plan to. One example of an early warning system would be a forest fire monitor. Very little has actually been accomplished here; most work is still in the planning or proof-of-concept stage. A few reasons for this could be the difficulty of the problem, its scope, and the expense required to do it well. The systems currently in the works use a variety of methods to detect a fire. The vast majority rely on satellite telemetry data, and while that does not qualify as IoT, these systems are augmenting it with information available from the Internet. Other systems are experimenting with visual fire-detection algorithms and overall monitoring of forest environmental conditions in an effort to predict fires.

Currently the biggest forest fire monitoring and reporting application is Active Fire Mapping, which is run by the USDA Forest Service. This application has been in use for a number of years and watches fire activity all over the United States. The system gathers data through once-a-day satellite flyovers that contact ground stations in fixed areas. The ground stations report conditions to the satellites, which in turn determine whether there is a fire and what its projected path is. Currently, using this method, the system is accurate to within about 1 km. Recently, it was announced that the USDA is looking to augment the system using smaller network-attached sensors on prop planes. These could fly over known fire incidents and collect data down to 30 m of accuracy [10].

Another fire monitoring solution comes out of Russia. The Nizhny Novgorod Forest Fire Center runs an application that monitors several large sections of state-owned forest. Their existing technology had been in place for a while, but response times to fires were still too slow. Recently, they installed several network-attached cameras at critical points within the forest. When a fire is detected, operators can use this streaming data to gather information and better fight the fire [11]. There are no details on whether the cameras have a fire detection algorithm (really, there were few details on how the system works at all), but this is an example of how fires could be programmatically identified. By monitoring a fixed section of forest and using an image analysis algorithm, it is possible to identify when the section is on fire. Again, it is unclear if that is what is happening here, but it is a logical next step.

A slightly different approach to monitoring forest fires was developed in China. Several researchers there set up the FEFCP (Forest Environment Factors Communication Platform). This is another ZigBee-based sensor network that monitors environmental conditions in several forested areas of China. The sensor network regularly reports light levels, soil conditions, humidity, and temperature in an effort to capture overall forest health. While not specifically targeted at identifying fires, the study suggests the data could be used to predict "fire risk days". The researchers said this data could be used to choose which nodes to power in an effort to conserve energy, but the data could also be used to prevent fires from starting [12].

When trying to locate an actual functioning fire monitoring system, I came across a mock-up of a system presented by the 14-year-old owner of the startup company Zenbotica. The interface was presented at the IoT Innovation World Cup 2015 in Munich. It shows how a single dashboard could be used to track sensor readings over a wide area, plot incidents on a map, and report information about firefighters in the field. The system has not been implemented, but it has been noted several times as a good concept for such an application [13].

Air Quality Monitoring

The majority of in-depth work in environmental IoT has been done here. For the most part, research in this area covers sensor networks used to measure and report air quality metrics for a given area. Air quality is typically measured in one of two ways: using a reactive component in a sensor to calculate the parts per million in a volume of air (ppm/v), or by measuring light wavelengths over a period of time to detect particles in the air [1]. One of the objectives of monitoring air quality in this manner is to provide information about the air to the population that was previously not available. It has been suggested that air quality sensor networks measuring information like this could produce "ozone alerts", like traffic alerts, and commuters could use this information when planning a route [2].

A few academic studies have been performed in this area. One of them, out of Lancaster University (UK), used consumer-grade "maker" electronics to study how pollution levels impact individual commuters. The researchers developed wearable sensors using Arduino boards that they gave to study participants to wear as they commuted to and from campus. The information was reported back to a central instance using an application on a smartphone. The results did not agree on exact concentrations of pollution at given times, but using data plotted on a map, the researchers were able to identify "high" and "low" areas of pollution during peak travel times. The data inconsistencies were attributed to the low-quality sensors used, a consequence of both the cost and power requirements of a wearable device. The research team also performed exit interviews with the participants and proposed a follow-up study. Many participants reported feeling like they had traveled through an area with "bad pollution", only to find the data did not back that up. This seems to indicate there is a difference between perceived and measurable levels of air pollution in our environment [3].

Another system to measure pollution was set up in Cape Town, South Africa. This system was established as a proof-of-concept "WaspNet" for sensor networks, the idea being that with a central system, many types of metrics for different environmental factors could be collected in one place and made available for use. The system was designed to address several of the problems surrounding deployments of this kind, namely poor network connectivity and power. The central technology used for data collection and reporting was ZigBee, a low-power network communication protocol. The authors of the study stressed the importance of well-documented standards and open-source sensors as a strategy for building a robust and sustainable solution. The centralized and open nature of such a system also lowers the barrier to entry for developing nations, meaning organizations that could not afford to build this from the ground up could purchase a few sensors and participate, gathering valuable information. At the time of this writing, the data collected by the system was not available for public consumption, but there are plans to offer that in the future [4].

I was surprised that I did not find many city-scale projects like this in my research. I did, however, find quite a number of smaller grassroots efforts. Many of these had similar approaches and scope, some even using the same technology. The end goal is a network that private citizens can afford to tap into and get useful data from. The Air Quality Egg provides a $185 boxed kit and has a website with a real-time map showing data collected from participants [5]. The AirPi project was featured on the Raspberry Pi homepage and provides a set of do-it-yourself instructions, parts, and code to build an air quality ground station. It also has a map-based reporting interface [6]. The AirCasting project allows a number of different sensors to connect to an Android application and upload data to a central location that way [7]. One of the projects to use this delivery method is CitiSense, which provides a wearable sensor about the size of a wallet intended to be clipped to a backpack. The sensor uploads data to the CitiSense reporting page using AirCasting as the passthrough [8]. HabitatMap also uses AirCasting for delivery, but with its own "AirBeam" sensor, an Arduino-based board intended to be carried. The site also has information and activities related to pollution awareness at the local level [9].

Environmental IoT: The Internet and Our Natural World

The following few posts comprise the write-up for my topic research this semester. Overall, I found the environmental science uses of IoT technology to be in a very early stage, with more expected to occur in the near future than has already happened. More detail on each area I investigated appears in its own section, with links to each section below.

When investigating the areas of study in environmental Internet of Things, that is IoT technologies as they react and act upon the natural world, I was surprised to find this is a young concept within a field that is itself new. Computer applications that monitor events in the environment have been around for decades, but many of them rely on older technologies to function. The overwhelming majority of applications that touch the natural world use proprietary communication protocols to talk to closed systems, and very rarely do anything over the commodity Internet. The majority of IoT research seems to be devoted to home automation, shipping and delivery; things that make money. That said, there is interest here, mostly from small start-up businesses and community grassroots organizations. The primary areas of research are Pollution Monitoring (air and water quality) and Early Alert Systems (forest fires, earthquakes).

So who is doing this work? Currently, it seems to be mostly research organizations and startups. The latter seem to be focused largely on products they can bring to market, pollution sensor wearables and the like. The former are working on larger projects to see how Internet-connected systems can help us understand and react to our natural world. Traditionally, this sort of work has been performed by governments and large organizations, largely for reasons of legal compliance. It is expensive to set up these systems, and smaller organizations do not have access to the scale or quality of equipment to match the older systems, which is why you see more networks set up using older protocols and not leveraging the Internet as much. However, local community interest and maker culture are filling in some of these perceived gaps using IoT concepts [1].

I have broken up the coverage of this topic into a few posts, covering the sub-sections of the sort of work taking place, as well as perceived challenges and my conclusions about the field.


  1. Air Quality Monitoring
  2. Forest Fire Detection
  3. Earthquake Detection
  4. Challenges and Conclusions
  5. References
  6. Slides

Saturday, March 28, 2015

Final Project: Service Specifications and IOT level

Here are all of the high-level overviews of the services I intend to create for my air quality monitor. There are a total of 4.

  • GasReadingService runs on a schedule on each node (every 10 seconds), gets a reading from the sensor, and sends the value to ReadingSendingService for delivery.
  • ReadingSendingService sends the timestamped reading from GasReadingService, along with a unique identifier for the node the reading originated on. This contacts a RESTful endpoint on the central control node.
  • CentralDataCollectionService runs on the central control node and accepts the GasLevelReading values as POST data from the ReadingSendingService. If everything checks out, it stores the values in a database (a sketch of what a stored reading might look like follows this list). The service also provides an interface to get a series of GasLevelReading objects back out of the database given a node identifier and a timestamp range.
  • CentralDataDisplayService runs an HTTP front-end that users interact with. Through the interface, users supply a set of nodes they are interested in and a timestamp range, and the CentralDataCollectionService is contacted to get these values. The display service then renders a graph to give the user the data requested.
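For a rough idea of what a stored GasLevelReading might look like on the central node, here is a sketch using PyMongo. The database, collection, and field names are placeholders, not a final schema:

```python
from datetime import datetime

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["airquality"]  # placeholder database name

# One GasLevelReading document per POST from a sender node.
db.readings.insert_one({
    "node_id": "mq2-pi-01",    # unique identifier assigned when the node is accepted
    "region": "lab",           # region grouping used when graphing
    "timestamp": datetime.utcnow(),
    "value_ppm": 94.2,         # gas concentration reported by the sensor
})

# CentralDataCollectionService-style query: readings for a node in a time range.
cursor = db.readings.find({
    "node_id": "mq2-pi-01",
    "timestamp": {"$gte": datetime(2015, 4, 1), "$lt": datetime(2015, 5, 1)},
}).sort("timestamp", 1)
```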

Finally, this system will be operating at IoT Level 4. This is because there is more than one node collecting data, each running independently of the others, and all of them send their values to a central location for storage and analysis.

GasReadingService


ReadingSendingService


CentralDataCollectionService


CentralDataDisplayService


Final Project: Domain Model and Information Model Specs

Here are some diagrams of my proposed Domain Model (how all the pieces of the system interact) and Information Model (what the data looks like) for my final project, an air quality monitor. Because of the complexity of measuring "air quality", I have decided to focus on monitoring a single gas (carbon monoxide, or CO). Monitoring additional gases and environmental readings would have similar Domain and Information Models.

Domain Model

Information Model


Sunday, March 15, 2015

Django RESTful Example

The other topic covered in homework this week was setting up a RESTful service using Django. The example provided in the book is a Weather Station mock-up. I have implemented this example and made a few changes of my own to make it work. I did not use MySQL as the database back-end, which is not a hard requirement since Django's data representation libraries let you change the back-end without modifying the object structure. Code is here.

Setup Instructions

  1. Install Python 2.7 with Virtualenv extension
  2. 'virtualenv weatherstation' to create a new blank "weatherstation" Virtualenv container
  3. 'source weatherstation/bin/activate' to turn on Virtualenv container
  4. 'pip install -r requirements.txt' to install all the required packages for the Django setup
  5. 'python manage.py syncdb' to create a new database
  6. When prompted to add a new user, say 'yes' and use 'username' and 'password'

Running Instructions

  1. Activate the weatherstation Virtualenv container
  2. 'python manage.py runserver'
  3. Add at least one piece of data and browse to 'http://127.0.0.1:8000/home'

Adding Data

Modify the following line as needed.

curl -i -H "Content-Type: application/json" -X POST -d '{"name":"TestCity", "timestamp": "123456", "temperature": "46", "lat": "30.123456", "lon": "76.123456"}' http://127.0.0.1:8000/station/ -u username:password

What's Happening Here?

The URI endpoint "/station" allows people with the correct permissions (username:password) to create new "Station" objects in Django using a POST request with a JSON object in the POST data. These objects contain a name along with some temperature and location information. When the /home URL is loaded, the "requests" Python library makes a GET request and displays the information from the last update using the "index.html" template. The values passed to the template from the view replace the {{ values }} placeholders of the same name.
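For reference, the heart of the example boils down to a model and a /home view along these lines. This is a trimmed sketch with illustrative names and field types, not the exact code from the book or my repository:

```python
# models.py -- a Station reading, matching the fields in the curl example above
from django.db import models


class Station(models.Model):
    name = models.CharField(max_length=100)
    timestamp = models.CharField(max_length=32)
    temperature = models.FloatField()
    lat = models.FloatField()
    lon = models.FloatField()


# views.py -- roughly what the /home view does: fetch the latest reading over
# the REST endpoint and hand it to the template.
import requests
from django.shortcuts import render


def home(request):
    # Assumes the /station/ endpoint returns a JSON list of readings.
    resp = requests.get("http://127.0.0.1:8000/station/",
                        auth=("username", "password"))
    data = resp.json()
    context = data[-1] if data else {}
    # Each key (name, temperature, ...) fills the matching {{ value }} in index.html.
    return render(request, "index.html", context)
```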

WAMP Publisher / Subscriber Example

One of the topics covered in our homework this week was setting up an asynchronous WAMP publisher / subscriber example using Python. WAMP stands for "Web Application Messaging Protocol", a sub-protocol that leverages WebSockets (which in turn leverage HTTP) to set up asynchronous communication models. The Python implementation uses AutobahnPython (WAMP implemented on the Twisted networking library) to do this. The example code in the book and the procedures used there are unfortunately out of date, so the code needed to be modified to get this to work. I had to modify each code sample to clean up some spelling / syntax issues, and add a "main" function at the bottom that starts an ApplicationRunner using the ApplicationSession defined above. Code is available here.

Setup Instructions

  1. Install Python 2.7 with Virtualenv extension
  2. 'virtualenv wamp' to create a new blank "wamp" Virtualenv container
  3. 'source wamp/bin/activate' to turn on Virtualenv container
  4. 'pip install crossbar' to install the Crossbar.io WAMP router, which pulls in AutobahnPython as a dependency
  5. Create a new blank Crossbar.io server to use as a WAMP router
    1. 'mkdir server'
    2. 'cd server'
    3. 'crossbar init'

Usage Instructions

  1. Start the WAMP router
    1. 'cd server'
    2. 'crossbar start'
  2. Start the Subscriber (./subscriberApp.py)
  3. Start the Publisher (./publisherApp.py)

What's Going On Here

When the server is started, it accepts all new topics published to 'realm1' without any authentication. When the subscriber joins, it tells the server "please send me any new updates to com.example.test-topic". When the publisher joins, it starts an infinite loop that takes the current time and publishes it to the topic 'com.example.test-topic'. The server then sends those updates to the subscriber client, which prints them out.
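Stripped down, the two components look roughly like this. This is a sketch of the pattern rather than my exact code; the router URL and realm match the Crossbar.io defaults from 'crossbar init', and the two classes would be run in separate processes:

```python
import time

from autobahn.twisted.util import sleep
from autobahn.twisted.wamp import ApplicationRunner, ApplicationSession
from twisted.internet.defer import inlineCallbacks


class Publisher(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        # Publish the current time to the topic once a second, forever.
        while True:
            self.publish(u"com.example.test-topic", time.time())
            yield sleep(1)


class Subscriber(ApplicationSession):
    @inlineCallbacks
    def onJoin(self, details):
        def on_event(value):
            print("received: {0}".format(value))
        # Ask the router to deliver anything published to the topic.
        yield self.subscribe(on_event, u"com.example.test-topic")


if __name__ == "__main__":
    # Default Crossbar.io router endpoint and realm.
    runner = ApplicationRunner(u"ws://127.0.0.1:8080/ws", u"realm1")
    runner.run(Subscriber)  # run Publisher the same way in a second process
```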

Monday, March 9, 2015

Planning Final Project

This week, we were asked to start planning our final projects in more detail by completing the Purpose / Requirement Specification and Process Specification. The project I would like to complete would be a sensor network for monitoring air quality over a wide area. Because I do not have the time or finances to set up an actual network, I will create a single "node" as proof-of-concept, then simulate others to create the effect of multiple nodes reporting data.

Purpose / Requirement Specification


Purpose

The purpose of this project is to provide an assessment of air quality over an area. This is accomplished by setting up a network of air quality nodes that all report data back to a centralized instance on regular intervals. The central instance is accessible using a dashboard, in which a graph of sensor readings from nodes is visible. Individual nodes report back information about air quality around them. This data can be viewed in an aggregate for an area and used to provide an assessment of the air quality in the areas being watched by the sensor network. The intended users of this system will be people interested in measuring the air quality of an area, such as health officials or environmental scientists.

Behavior

The nodes participating in the sensor network will take readings at regular intervals and send them to the central instance. Air quality is generally measured by the following readings:

- CO levels (carbon monoxide)
- CO2 levels (carbon dioxide)
- NO2 levels (nitrogen dioxide)
- O3 levels (ozone)
- particulate matter

Note: For the purposes of this class, only data readings from the CO sensor will be implemented because of time and cost constraints. Additionally, only a single physical node will be created. Additional nodes will be simulated using simple random number generators reporting manufactured data to the central instance.

System Management Requirement

The system will have a single central instance responsible for information processing, storage, and display. One or more sensor nodes will report data to the central instance. Each sensor node knows how to find the central instance, and any information about the location of a given node is stored on the central instance. This way, sensor nodes can join and drop off the system as needed.

Data Analysis Requirement

Data will be stored and graphed using an interface on the central instance. The primary purpose of the interface will be to display real-time information over a user specified region. The information gathered by the network could be used to automate reports about so-called "bad-air" days, make predictions about air quality trends, or chart the change of air quality over time / space. However, that may be outside the scope of the work achievable in this class.

Application Deployment Requirement

The central instance for data collection and display will need to be "always on" and accessible, so it will be hosted on a cloud service provider like Amazon EC2 or similar. The sensor nodes are physical devices running a limited software load for collecting data and transmitting it to the central instance, so they will need to be located wherever they are collecting the readings.

Security Requirement

The central instance will have two views: a general "read-only" view for users interested in consuming the data, and an administrative view responsible for accepting new nodes into the network and managing those already participating. The administrative back-end is separated so that general users cannot introduce data providers into the network or remove existing ones, ensuring the data is collected from authorized participants only.

Process Specification


Process Diagram for Sensor Readings


Process Diagram for User Interface Interactions


Saturday, February 21, 2015

Push Button Example

Following the guidelines in our textbook, and adding a few improvements of my own, I have created a script that lets someone interact with an LED using a push button, all driven by software. The script starts a loop that looks for changes in the switch's GPIO pin state and toggles the state of the LED's GPIO pin accordingly. The script cycles through four states in order as the button is pressed: off, on, flickering slowly, and flickering more quickly.

A few things I noticed were unexpected. First, I am really new to wiring diagrams, so figuring out how to get the thing wired up on my breadboard took a bit of time.

Second, I was expecting the value set on the GPIO pin by the switch to be binary pressed / not pressed, defaulting to not pressed as False. Instead, the button seems to default to True, so I am assuming the GPIO pin is reading open / closed (defaulting to True for open). This could be a mistake, relating to my inexperience wiring things, but I got it working so I will follow up next class.

Third, software debouncing can be achieved by keeping a boolean variable that tracks whether the state is allowed to change. Without debouncing, the state of the switch is read each time the loop cycles, sometimes causing something to happen multiple times because the value was read more than once while the button was down. Only allow state changes to occur if the button is pressed and the tracking variable is set to True. The first time a state change is detected on the switch, set the tracking variable to False before toggling anything. If the button is not being pressed, set the tracking variable to True so that the next time it is pressed, the state can change. This means only one thing will happen each time you press the button, no matter how long it is held down.
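With the RPi.GPIO library, the shape of that debounced loop looks roughly like this. The pin numbers are placeholders for my wiring, and this is a simplified sketch rather than my full script (linked below):

```python
import time

import RPi.GPIO as GPIO

BUTTON_PIN = 18   # placeholder pin numbers -- see the wiring diagram below
LED_PIN = 23

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)
GPIO.setup(LED_PIN, GPIO.OUT)

state = 0            # 0 = off, 1 = on, 2 = slow flicker, 3 = fast flicker
can_change = True    # the software debounce flag described above

while True:
    # With a pull-up, the pin reads high (True) until the button is pressed.
    pressed = (GPIO.input(BUTTON_PIN) == GPIO.LOW)
    if pressed and can_change:
        can_change = False            # lock until the button is released
        state = (state + 1) % 4
    elif not pressed:
        can_change = True             # released: allow the next press to count

    if state == 0:
        GPIO.output(LED_PIN, GPIO.LOW)
    elif state == 1:
        GPIO.output(LED_PIN, GPIO.HIGH)
    else:
        # Flicker: slow for state 2, fast for state 3.
        GPIO.output(LED_PIN, GPIO.HIGH)
        time.sleep(0.5 if state == 2 else 0.1)
        GPIO.output(LED_PIN, GPIO.LOW)
        time.sleep(0.5 if state == 2 else 0.1)

    time.sleep(0.01)
```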

Here is my source code and a wiring diagram of the setup.


And here is the source code.

Wiring diagrams courtesy of Digi-Key's excellent (and free) online schematic editor SchemeIt.

Monday, February 9, 2015

Getting my Pi Ready

Before I can build anything to study IOT technologies, I need a thing. Like most people in the class, I have selected the Raspberry Pi Model B+ as my development target. I bought a kit from Canakit with the board, SD card and GPIO breakout board all included on Amazon (http://amzn.com/B00G1PNG54). I put the board in the case, put the SD card in the slot and booted it up. I have loaded Raspbian on as the primary OS, which works well for me since I have worked with Debian Linux based systems before. After install, I did a few things:

  1. Connected the system to WiFi. The kit came with a WiFi adapter, so I plugged it in and started the "WiFi Config" utility on the desktop. It turns out this is a front-end to wpa_supplicant, a set of tools for managing WiFi networks. Once I had connected to a network, all networks were listed in /etc/wpa_supplicant/wpa_supplicant.conf and selected in order of preference top-to-bottom. This was important for a later step.
  2. Patched it. On the pi, this is done by executing "sudo apt-get update" followed by "sudo apt-get dist-upgrade". This took a while.
  3. Installed a VNC server, as recommended by the instructor (sudo apt-get install tightvncserver). SSH with X forwarding can get most things done, but having a GUI for some things is nice too. Once my coding project is done I will likely turn off X and the VNC server to save resources.
  4. Set up my phone as an access point and configured the Pi to prefer it. This lets me plug the Pi in and SSH / VNC from my laptop when both are joined to my phone, which is important for working in class where monitors are limited. Using the command "nmap XX.XX.XX.0/24", replacing the network with my access point's, I am able to find the Pi if it gets a different DHCP lease between sessions.
  5. Set up the VNC server to boot at startup. I got instructions on that here (http://elinux.org/RPi_VNC_Server).
Now, I can sit in class and hack on my Pi! I just need something to do.

Research Topic: The Internet of Things and the Environment

For my first bit of research into "The Internet of Things", I want to take a look at the impact of IOT systems on environmental science and response to natural disasters. This is a huge topic, much too large to cover in one semester, so of particular interest to me are sensor networks being used to monitor / respond to events like forest fires, as well as those that could be used to track pollution and global warming. Being able to set up relatively cheap things to constantly send data over the Internet has increased our ability to react to changes in real time, and I want to look at how this is being used, or will be used in the future.