I’ve had Nanda Devi and the Sanctuary surrounding her in my thoughts for a very long time, and she seemed like a fitting first attempt to bring spatial data out of the digital world and into reality. For the uninitiated, Nanda Devi is a mountain in the Indian Himalaya, and she’s always referred to as she: the goddess in the clouds. Surrounded by a protective ring of mountains, she towers over them all, and the space between the ring and the central peak is known as the Nanda Devi Sanctuary. Because of this ring, the first entry into the Sanctuary was only made in 1934, by Shipton and Tilman and their three porters, who entered via the gorge of the Rishi Ganga; the mountain herself was first summited in 1936 (see *Nanda Devi: Exploration and Ascent* by Shipton and Tilman).

The geography of the region is fascinating (and the history as well; there’s a nuclear-powered CIA device somewhere inside the Sanctuary!), and the heights and depths of the various relief features make it a joy to visualise. In this post, I’m going to describe, in brief, the steps I used to get from the data to the final model in wood. While I’m sure most of this can be done using open-source tools, thanks to my current student status at the University of Cambridge and my @cammakespace membership I have access to (extremely expensive) ESRI and Vectric software, which I’ve used liberally.

Relief map of the Nanda Devi Sanctuary and the Rishi Ganga gorge (dark->light = low->high)

I have a repository of digital elevation data collected by the Space Shuttle Endeavour in 2000 (STS-99; the Shuttle Radar Topography Mission). It’s freely available from CGIAR-CSI (http://srtm.csi.cgiar.org/) and is not difficult to use. In QGIS, I cropped it down to my area of interest around Nanda Devi; I was looking for a rough crop that would include the peak, the ring and the Rishi Ganga gorge. This relief map was exported as a GeoTIFF and opened up in ArcScene, which is ESRI’s 3D cartography/analysis workhorse. ArcScene allowed me to convert the raster image into a multipoint file; as the tool description states, it “converts raster cell centers into multipoint features whose Z values reflect the raster cell value.” For some reason, this required a lot of tweaking to accurately represent the Z-values, but I finally got the point cloud to look the way I wanted it to in ArcScene.

The point cloud (red dots), overlaid on the relief map in ESRI ArcScene
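As an aside, if you’d rather avoid ArcScene for the raster-to-points step, here is a rough open-source sketch in Python using rasterio and NumPy. The filenames are illustrative, and this is not the workflow I actually used; it just turns each raster cell centre into an (x, y, z) point, much like the ArcScene tool does.

import numpy as np
import rasterio
from rasterio.transform import xy

# Read the cropped SRTM GeoTIFF and turn every cell centre into an (x, y, z) point
with rasterio.open("nanda_devi_crop.tif") as src:
    z = src.read(1).astype(float)
    rows, cols = np.indices(z.shape)
    xs, ys = xy(src.transform, rows.ravel(), cols.ravel())
    # x/y are in degrees while z is in metres, so the vertical scale still
    # needs tweaking before the relief looks right
    points = np.column_stack([np.asarray(xs), np.asarray(ys), z.ravel()])

# Save a plain-text XYZ point cloud, which MeshLab can import and turn into a mesh
np.savetxt("nanda_devi_points.xyz", points, fmt="%.6f")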

I then exported the 3D model of the point cloud in the .wrl format (wrl for ‘world’), which is the only 3D format ArcScene knows, and used MeshLab, an open-source Swiss-army-knife tool for 3D formats, to convert the .wrl file into a stereolithography (.stl) file, which the next tool in the workflow, Vectric Cut3D, was very happy with. As a side note, MakerWare was also satisfied with the .stl file, so it is 3D-print ready.

The CNC router-ready model in Vectric Cut3D

More tweaking in Cut3D to get the appearance right, and the toolpaths in order, and I was ready to actually begin machining. After an abortive first attempt where the router pulled up my workpiece and ate it, I spent some more time on the clamping for my second attempt. First, I used the router to cut out a pocket in a piece of scrap plywood to act as my job clamp; this pocket matched the dimensions of my workpiece exactly. After a bit of drilling, I had my workpiece securely attached to the job clamp, which was screwed into the spoilboard on the router.

The CNC router doing its thing

For the actual routing itself, I used two tools: a 4mm ballnose mill and a 2mm endmill, for the roughing and finishing passes respectively. It took about 45 minutes for the CNC router to create this piece. I love the machine, and am very grateful to the Cambridge Makespace for the access I have to it.

The final product

In the near future, I’m going to try and use different CNC router tools and types of woods to make the final product look neater; specifically, a 1mm ballnose tool for the finishing toolpath would be very nice! I’m also going to try and make relief models of a few other interesting physical features. While I am happy with this initial representation of Nanda Devi, if you have any suggestions as to improvements for future work, I’d be very happy to hear about them! I’d especially like to know if there are any open-source tools out there that can replicate the steps I needed to use ArcScene and Cut3D for.

So the *Monthly Maps* series is on the verge of becoming a *Bi-Monthly Maps* series! Hopefully this will be the only double-month issue of 2014.
 

Let us begin with a map that is not really a map, but an efficient two-dimensional, machine-readable representation of three-dimensional satellite imagery, with the strange, haunting appearance of a map of a disaster zone. Clement Valla, creator of this stunning work, explains that though “[t]hey may look like glitched maps, disaster scenes, cubist collages… these images are produced for other computers to use—to apply color and texture to 3d forms. These images are efficient vectors of information. But unlike a long list of 1s and 0s, or some other cold alien encoding, they still look like the objects they represent. They are uncannily close to photographs or human made collages.”

clement valla - 3d maps minus 3d

Development Seed has launched the Afghanistan Open Data Project in anticipation of the upcoming national election in the country. It is described as “community efforts to release into the public domain a combination of political, social, and economic datasets of significance to elections in Afghanistan.” The map below displays the percentage of polling centers in each province that did not report poll results in the 2009 election.

development seed - afghanistan open data project

Continue reading

In every project I have been involved in thus far, I have helped people ask the question: ‘Are maps really the right tool for us to tell this story?’ And I must say, not many people are convinced. Maps are cool, they look nice, you can make them interactive, they may go viral (for good or bad), and yes, people like maps. Agreed, and that’s one of the many reasons why I love making maps and telling stories through them. But if you do not ask the question, several things can go wrong.

I put together a repository to start gathering a few examples of situations where maps go wrong, and spoke about it at an event in Bangalore, which was exciting. We will see some of those examples in this blog post. I am not intending to provide solutions to most of these; that will make a better blog post later. Broadly, there are six lists –

Misrepresentation of data

Careless handling of images and data can cause terrible mistakes, like the one below from CNN a few weeks back.

CNN - Hong Kong is now in Brazil

Continue reading

Here is another double issue of Monthly Maps to begin the new year.

The end of the year saw several great “best maps of 2013” posts. We will get to them soon, but first let’s look at the map that received the “worst map of 2013” award from Kenneth Field, the Cartonerd. In his famous words, it features a “symposium of technicolour psychedelic vomit across the map.”

cartonerd - worst map of 2013

This beautiful three-dimensional, globe-based visualisation of surface wind speed (powered by D3) was featured in both Kenneth Field’s “favourite maps from 2013” and Wired MapLab’s “the most amazing, beautiful and viral maps of the year” posts.

nullschool.net - earth wind map

Continue reading

Our tutorials so far have focused on several aspects of cartography, from data structures to their analysis and representation. Not surprisingly, most of them are aligned with web technologies, and in the browser applications are expected to consume relatively few resources. Spatial data has a comparatively large memory footprint owing to the structure and the amount of information it holds. For instance, the taluk-level boundary data for India is 46.3 MB in GeoJSON format. This means it cannot be used directly in a web project; it needs to be optimised first.

Optimising spatial data essentially translates to simplifying the geometries in the file. Since, in a web context, the data is not being used for analysis, a slight difference in the area or the shape of corners will not make a huge difference. Users may not even realise that the shapes are simplified, if it is done just right.

To give you an idea of the process, have a look at the following maps of Florida. The first row shows the original data from the Florida Geographic Data Library, converted to GeoJSON (8.2 MB). The second row shows the same data with simplified geometry (note the sharper edges); the GeoJSON is now 427 KB. On the whole, this really hasn’t changed the way the map looks, which is exactly what we need for web representation.

florida_combined florida_optimised_combined

In this article I will quickly look at a few easy methods to simplify geometries.

TopoJSON

TopoJSON, developed by Mike Bostock, is an extension of GeoJSON with encoded Topology.

Rather than representing geometries discretely, geometries in TopoJSON files are stitched together from shared line segments called arcs.

This simplifies the structure of the data by identifying the relationships and storing them in the same file, thus eliminating redundancy. TopoJSON works seamlessly with D3.js and can be integrated with pretty much any other web application.
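As a rough sketch of what the conversion looks like, here is one way to do it in Python using the third-party topojson package (the filenames are illustrative; Mike Bostock’s original Node.js command-line tool does the same job):

import json
import topojson

# Load the large GeoJSON file and rebuild it as TopoJSON with shared arcs
with open("taluks.geojson") as f:
    geojson_data = json.load(f)

topo = topojson.Topology(geojson_data, prequantize=True)

with open("taluks.topojson", "w") as f:
    f.write(topo.to_json())

The resulting file stores each shared boundary only once, which, along with coordinate quantization, is where the size saving comes from.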

Simplify using QGIS

The QGIS vector processing suite comes with a tool for simplifying geometries. It employs the popular Ramer–Douglas–Peucker algorithm, which reduces the number of points in a curve. You select the layer you want to simplify and pick a tolerance level: the higher the tolerance, the fewer the points and the smaller the file.

qgis_simplify
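The same Ramer–Douglas–Peucker simplification is also available outside the QGIS interface. As a rough sketch (the filenames and tolerance are illustrative), here is the equivalent in Python with GeoPandas, which uses Shapely’s simplify under the hood:

import geopandas as gpd

# Simplify every geometry with a tolerance of 0.01 degrees;
# preserve_topology=True keeps polygons from collapsing into invalid shapes
gdf = gpd.read_file("taluks.geojson")
gdf["geometry"] = gdf.geometry.simplify(0.01, preserve_topology=True)
gdf.to_file("taluks_simplified.geojson", driver="GeoJSON")

Note that, unlike TopoJSON, this simplifies each feature independently, so small gaps or overlaps can appear along boundaries shared by adjacent polygons.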

PostGIS ST_Simplify

In case you are serving spatial data from a PostgreSQL database through an API to the client side, PostGIS implements the previously mentioned Ramer–Douglas–Peucker algorithm through the ST_Simplify function. For example, to simplify a geometry column called ‘state’ in the row with id 1 of a table called ‘country’, with a tolerance of 0.002, the PostGIS query would be:

SELECT ST_Simplify(state, 0.002) from country where id=1;
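If, say, a small Python backend is doing the serving, the same query can return the simplified geometry as GeoJSON (via ST_AsGeoJSON), ready to hand to the browser. A minimal sketch, with illustrative connection details and the same hypothetical table as above:

import json
import psycopg2

# Connection string is a placeholder
conn = psycopg2.connect("dbname=gisdb user=gisuser")
cur = conn.cursor()

# Simplify on the database side and return GeoJSON for the client
cur.execute(
    "SELECT ST_AsGeoJSON(ST_Simplify(state, %s)) FROM country WHERE id = %s;",
    (0.002, 1),
)
geometry = json.loads(cur.fetchone()[0])

cur.close()
conn.close()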

These techniques are essential when you deal with large amounts of spatial data that need to be rendered in the browser. If you have more ideas or questions, let us know in the comments!

 

 

As a run-up to the Do-Din event in Hyderabad, geohackers.in is co-hosting DataLore, an event about putting data to good use and about how statistics and visualisations sometimes twist data to tell lies, this Wednesday, November 20th at 7 High Street Cooke Town, Bangalore.

People who want to make the world a better place look towards data in an effort to make that change. This very data then needs to be channeled into maps, statistics, and visualizations before it can be useful — and people are doing this everywhere. Stories of politics, corruption, oppression, and war are being told around the world using such tools. Unfortunately, a lot of what is being made fails at its task.  Maps that miss the point, visualizations that fail to engage, and statistics that mislead, all undermine action. On Wednesday evening, as a run-up to Do-Din, DataLore will attack this problem on two fronts:

You can’t just throw a map at a problem

Sajjad Anwar

When all you have is a hammer, everything starts to look like a nail. Maps are being made for every reason, but some of them miss the point, misrepresent information, lie, or fail to engage the audience. We would like to discuss how people come up with these maps, what disasters they cause, and how, as storytellers, we can improve the situation.

Nothing is what it seems — especially not statistics

The Ballot

As they say, there are lies, damned lies, and then there are statistics. It’s easy to mislead or be misled by statistics and visualizations. Preconceptions and agendas can leak into them and colour them with bias. Sometimes, a lack of knowledge about statistics leads to false conclusions, which is rather disastrous. We’ll use some examples to show you how this can happen, and how to both interpret and represent data properly.

I’ve re-entered the academic world as a student at the University of Cambridge in the United Kingdom, and one of the benefits I’m enjoying the most is near-unlimited access to one of the world’s largest repositories of recorded information: the Cambridge University Library. Commonly known as the UL, it is a copyright library, which means that under British rules on legal deposit it has the right to request a free copy of any work published in the UK. The UL currently holds over 8 million items, including books, periodicals, magazines and, of course, maps.

 

The Map Room in the UL is a fascinating place; it functions as the reading room for the Map Department, which holds over a million maps (as the librarian told me; Wikipedia claims it has 1.5 million). It’s not a very large room, as reading rooms go, but it is a beautiful space and is very well managed. Everything is catalogued very efficiently with a filing-card system, and there’s one card with the name, date of publication and classmark (UID/coordinates) for each map. Visitors are not allowed to simply browse through the map collections; to refer to a map, one must fill out a request form with the appropriate details and submit it to the library assistants, who will then pull out the required map folio from its storage location. The title of this post comes from the fact that map holdings with classmarks beginning with ‘S696’, ‘Maps’ or ‘Atlases’ are held in the Map Room, in various drawers and cabinets.

 

The Map Room is a pen-free zone; if you’re writing something, use a pencil. Smartphones and hand-held cameras are allowed, but under UL policy photos cannot be taken of the building itself. With prior permission, however, it is possible to take images of material in the UL, which I did. The first series is from a map on display in the UL; titled “A map containing the towns villages gentlemen’s houses roads river and other remarks for 20 miles around London”, it was printed for a William Knight in 1710 and is a wonderful piece of cartography. The second series is from a map I requested using the card-index system; this map dates back to 1949 and beautifully illustrates the tea-growing regions of the Indian subcontinent.

 

If there’s a map in the UL you want an image of (for non-commercial or private-study purposes only!), I’d be happy to do what I can to help; I would actually be very grateful for an excuse to spend an afternoon looking at maps.

IMG_6037
Detail from Knight, W. (1710). North Arrow.

Continue reading

We often find ourselves choosing between various data formats while dealing with spatial data. Consider this (not-so) hypothetical example: your data collection department passed on a bunch of KML files but your analysts insist on SHP files and your web team is very particular about their GeoJSON. If this sounds familiar, you’re reading the right post; we will quickly run through some of the popular vector and raster data formats you should care about and discuss some of the ways to convert data between these formats.

Vector

Shapefiles

The shapefile is perhaps the most popular spatial data format, introduced by Esri.

It is developed and regulated by Esri as a (mostly) open specification for data interoperability among Esri and other GIS software products. – Wikipedia

While Esri retains the right to change the format if and when they choose to, it is otherwise open and highly interoperable. Shapefiles can store all the commonly used spatial geometries (points, lines, polygons) along with attributes to describe these features. Unlike other vector formats, a shapefile comes as a set of three or more files: the mandatory .shp, .shx and .dbf, and the optional .prj file. The .shp file holds the actual geometries, the .shx is an index that allows you to ‘seek’ to features in the shapefile, the .dbf stores the attributes, and the .prj specifies the projection the geometries are stored in.
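Before moving on, here is a quick illustration of one way (of many) to convert between these formats, sketched in Python with GeoPandas; the filenames are placeholders, and the command-line ogr2ogr tool from GDAL does the same job:

import geopandas as gpd

# GeoPandas reads the whole shapefile set (.shp, .shx, .dbf, .prj) in one call
gdf = gpd.read_file("districts.shp")

# ...and can write it back out as GeoJSON for the web team
gdf.to_file("districts.geojson", driver="GeoJSON")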
Continue reading

Note: Many apologies for skipping the September issue of Monthly Maps. To compensate, here’s a double issue filled with fantastic cartographies.

Guernica Magazine has published an excerpt of an interview with Denis Wood, iconic critical cartographer, from his last book titled “Everything Sings: Maps for a Narrative Atlas”. Let us begin this double issue with Wood’s penetrating analysis of what maps do:

 

Denis Wood: Maps are just nude pictures of reality, so they don’t look like arguments. They look like “Oh my god, that’s the real world.” That’s one of the places where they get their kick-ass authority. Because we’re all raised in this culture of: if you want to know what a word means, go to the dictionary; if you want to know what the longest river in the world is, look it up in an encyclopedia; if you want to know where some place is, go to an atlas. These are all reference works and they speak “the truth.” When you realize in the end that they’re all arguments, you realize this is the way culture gets reproduced. Little kids go to these things and learn these things and take them on, and they take them on as “this is the way the world is.”

The fabulous neogeographers at the Oxford Internet Institute used Alexa data to identify the most visited website in each country, and mapped the results as an old-colonial-style choropleth map of ‘Internet empires’. Do not miss the other map on the same page, which uses hexagonal cartograms to show the most-visited website in each country, scaled by that country’s population of Internet users.

oxford internet institute - age of internet empires

Continue reading

This tutorial is a proof of concept to use an HTML5 slider to control the opacity of a Leaflet map layer. If you want more information about setting up Leaflet and adding different layers, read the documentation.

We will start by adding two layers:

map.addLayer(stamen);
map.addLayer(mapquest);

And a slider:

<input id="slide" type="range" min="0" max="1" step="0.1" value="0.5" onchange="updateOpacity(this.value)">

When the slider is moved, it will invoke the updateOpacity function, which sets the opacity of the layer:

function updateOpacity(value) {
    mapquest.setOpacity(value);
}


If we want to change the opacity of the stamen layer, that’s possible too:

function updateOpacity(value) {
    stamen.setOpacity(value);
}

The code for the above example is here.