In 2016, UrbanEmissions.Info launched two public portals – http://www.DelhiAirQuality.Info and http://www.IndiaAirQuality.Info – disseminating modeled forecasts of air quality and source contributions for the next three days. This is a “bottom-up” approach, with forward linkages to data from monitoring networks (ground-based, mobile, and satellite) to validate, calibrate, and authenticate the results as much as possible. Similar systems are used in the US, the EU, and some Asian cities. The portals provide forecasts at the district level for India and at 1 km resolution for Delhi, including hour-by-hour and day-by-day assessments of likely source contributions.
Here is a short conversation with Dr. Sarath Guttikunda, one of the co-founders of UrbanEmissions.info:
When did you start looking at air pollution?
As part of chemical engineering courses at IIT-Kharagpur, we learned to design absorption columns, chimneys, heat exchangers, and pipelines. This dates back to 1994, and doing this math and making those process designs was fascinating; trying to understand what fuels the industrial sector and what is coming out. At the time, the courses focused more on production and offered little on pollution control. That changed with a project asking us to study the absorption capacity of amine solutions for controlling SO2 and CO2 gases in a process industry. One thing led to another, and the bug to study air pollution was seeded.
What motivated you to start this project?
In the last five years, India (and Indian cities) have seen a surge in discussion about air pollution, though these discussions still rest on limited monitoring data. Today, computational capacity is not a challenge; the only obstacle to understanding what is best for us is the lack of reliable information. What we (Urban Emissions) are trying to do is address this information gap, in depth and breadth, from the modeling side. This not only helps us study the long-term impacts of air pollution and the long-term benefits of emerging pollution policies, but also lets us use the same tools to understand what is coming in the next three days.
I first used air pollution models in 1998, at the Center for Global and Regional Environmental Research (CGRER), looking at SO2 emissions, their dry and wet deposition schemes, and their impacts on soil through acid rain and in the air through sulfate aerosols. Back then computational capacity was limited; the models were simpler and customized for specific tasks; but all that changed as bigger and faster servers flooded the market. By 2000, CGRER was part of NASA’s TRACE-P and ACE-Asia missions, looking at the long-range transport of pollution from Asia over the Pacific Ocean. As part of these missions, we built forecasting models to catch outbreaks of anthropogenic and natural pollution events out of China, Japan, and the Koreas, so that flight plans could be drawn to traverse various parts of the ocean (horizontally and vertically) and capture these signatures. So, as you can see, short-term forecasting via emissions and dispersion modeling is not a new concept – except that today’s models and computational services enable us to be more efficient and more scientifically robust, and allow us to look at multiple pollutants from multiple sources in multiple dimensions at the same time.
It is about time India joined the long list of countries doing short-term forecasting and broadcasting it for public consumption. Through IndiaAirQuality.Info and DelhiAirQuality.Info, we are showcasing one way to do this.
Can you explain what we mean by emissions and dispersion modelling?
It’s a puzzle. Unless you have all the pieces, of the right size and shape, it is difficult to see what is to come. These days, the models are not the challenge; they are state of the art, and a large open community is working on making them better every day. Computational capacity is also not the challenge – with some financial support, we can get the best servers, physically in one room or on the cloud. The challenge is what we feed these models. Information on emissions is the biggest gap: how much is emitted (mass), where it is emitted (spatial), and when it is emitted (temporal).
There is one version where we download a global inventory and run with it – this is what we call “garbage in, garbage out”, resulting in low confidence, especially if the interest is in short-term forecasts to support health and pollution alerts. In the other version, you work on getting each of the pieces right, sometimes through scientific research, sometimes through reasonable logic, and sometimes through trials. A plethora of information is needed to feed this piece of the puzzle, ranging from spatial maps of emission sources (like industries, roads, population, land use) and fuel consumption characteristics to the technologies used to burn the fuels (like coal, biomass, petrol, diesel, gas) and to control the pollution (emission rates). The information we have on emissions can also change every day, with meteorology, social and cultural activities (like festivals), political events (like strikes), and holidays, which means the short-term puzzles take extra time, but are not impossible to get right. Through this system, we are learning every day about what we are missing and what we do not understand.
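The three questions above – how much, where, and when – can be sketched as a small gridded inventory. This is a minimal illustration, not the UrbanEmissions system: the source categories, masses, and the random grid standing in for a real spatial surrogate (such as road density) are all assumed numbers.

```python
import numpy as np

# How much (mass): assumed annual emissions per source, tonnes/year.
annual_emissions = {"transport": 120_000.0, "industry": 80_000.0, "dust": 50_000.0}

rng = np.random.default_rng(0)

def spatial_surrogate(shape=(10, 10)):
    """Where (spatial): a normalized grid of fractions summing to 1.
    A random grid stands in for a real proxy like road or population density."""
    grid = rng.random(shape)
    return grid / grid.sum()

# When (temporal): 24 hourly weights summing to 1 (flat profile as placeholder).
diurnal = np.full(24, 1 / 24)

# Combine into emissions[hour, y, x] in tonnes per hour per cell,
# for one representative day (annual mass / 365 days).
inventory = {}
for source, mass in annual_emissions.items():
    daily_mass = mass / 365.0
    grid = spatial_surrogate()
    inventory[source] = daily_mass * diurnal[:, None, None] * grid[None, :, :]

# Mass is conserved: the gridded total equals the summed daily masses.
total = sum(inv.sum() for inv in inventory.values())
```

Because both the spatial surrogate and the diurnal profile are normalized to 1, gridding never creates or destroys mass – a basic sanity check any inventory pipeline should pass before the fields are fed to a dispersion model.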
You launched the SIM-Air program in the past. How is this different?
Through the SIM-air program, we tried to help cities collate their information on emission sources, and built an Excel-based platform to analyze air pollution and its impacts and to provide rapid assessments of typical policy measures. The methodology and models are very similar to those used for forecasting, except that all the analysis under the SIM-air program is long-term, as annual averages only, and best suited to city-specific work.
Can we use monitoring data for forecasting?
Ideally, a wide network of monitors and a good understanding of the emission sources is the desired setup for any city. To rely more on monitoring data than on modeling, we need an archive of monitoring data from multiple locations and multiple years, to build an understanding of how the pollution in the city behaves with meteorology and over time. Indian cities do not yet have this infrastructure in place to support such an exercise.
For a city as big as Delhi, with no geographical separation from its satellite cities, we need at least 100 continuous stations. Today, however, only 13 stations are operational under CPCB and DPCC, with only half of them reporting PM2.5 (the key pollutant responsible for exceedances). It is also important to note that these stations give us only a number; they cannot tell us where the pollution is coming from or which source(s) are responsible for the highs. For that, you need a bottom-up understanding of the sources: their emission strengths, where they are, and how much they influence the air we breathe.
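The difference between a monitor’s single number and a bottom-up view can be made concrete. In the sketch below, all values are illustrative assumptions, not Delhi data: each source has an emission strength and a transfer coefficient (how much ambient PM2.5 a unit of that source’s emissions produces at the receptor, which in practice comes from a dispersion model). The monitor sees only the sum; the bottom-up terms let you split it into shares.

```python
# Assumed per-source emission strengths (tonnes/day) and modeled transfer
# coefficients (ug/m3 of PM2.5 at the receptor per tonne/day emitted).
emissions = {"transport": 40.0, "industry": 25.0, "road_dust": 30.0}
transfer  = {"transport": 1.2,  "industry": 0.8,  "road_dust": 0.5}

# Contribution of each source to ambient PM2.5 at the receptor (ug/m3).
contributions = {s: emissions[s] * transfer[s] for s in emissions}

# A monitor would report only this total...
total_pm25 = sum(contributions.values())

# ...while the bottom-up view also gives the fractional share of each source.
shares = {s: c / total_pm25 for s, c in contributions.items()}
```

With these made-up numbers, transport contributes the largest share even though road dust emits more mass per day than industry – the transfer coefficients matter as much as the emission strengths, which is exactly the information a monitoring station alone cannot supply.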
How can scientists and policymakers use this work? Can this be used for public engagement projects?
As citizens, it is our right to know the quality of the air we breathe, the severity of the pollution, and where it is coming from. There are multiple sources, and there is little one can do with limited or no information. Only when policymakers take the lead in addressing this seriously will we see any real change towards improving air quality. As part of the scientific community, these two open portals are there to support public engagement – by letting people know what the pollution is going to be over the next three days, so they can take the necessary precautions – and to support policymakers – by letting them know where the pollution is coming from, so they can plan to avoid the effects in both the short and long term.
Find Sarath on Twitter
Are you working on a project that you would like to share with the community? Get in touch!