Category Archives: HTML5

Motion sequence and replay in smart maps

When dynamic maps were in the early stages of development, or even just concepts, I was thinking about a ‘replay’ function – a sort of step-sequencing along a certain dimension. Some of these ideas were presented in the “Incident Prototype” in 2014 here at 2:24, where you can see sequencing over a day or month period.

Another demonstration took place in the Green Space Analyzer in 2015 when introducing Smart M.Apps – at 2:04, years are sequenced.

An important part of the idea was the editing functionality – how to sequence and capture ‘user motion’ in general. In multidimensional data there is a nearly unlimited number of combinations and ways data can be filtered over time. However, instead of continuous time sequencing, we can simplify to ‘step-sequencing’, which is well known in the music industry – many digital instruments offer this, especially rhythm, drum or groove machines. The geospatial industry could take a lot from looking at how this function, core to music production, is done there, and get inspiration for smart mapping.

For example, the Korg Minilogue xd captures ‘user motion’ of the filters, which can be defined for each of the 16 steps.

Why is it important? Imagine storytelling where someone shows how to filter data to get certain interesting results – he then saves his choices of filters over time (in the form of a motion sequence) so other people can load it, ‘replay’ it and further tweak it.

I believe this might be super-useful for showcasing the evolution of some phenomenon, or how someone (aka an expert) arrived at certain results.

Or it might be useful as a ‘Smart Video’ of maps – instead of publishing a dumb video, you publish a dynamic map plus a ‘motion sequence’ of when and what is filtered (including map manipulations like zooms).

Yeah, this is like what everyone knows from weather forecasts – replay or forecast of cloud/pressure/wind motion – however, instead of a video, you get a real smart map: one that replays its state based on the sequencer data.
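To make the idea concrete, here is a minimal sketch of what such a ‘motion sequence’ could look like as data plus a replay loop. All names here are hypothetical illustrations, not an existing API:

// -- hypothetical sketch: a step sequence of captured map filter states
var sequence = [
    { filters: { year: 2014, category: 'flood' }, view: { center: [49.31, 17.75], zoom: 12 } },
    { filters: { year: 2015, category: 'flood' }, view: { center: [49.31, 17.75], zoom: 12 } }
    // ... up to 16 steps, each captured from real user interaction
];

// -- replay the sequence on a live map; applyState is assumed to set filters and viewport
function replay(sequence, applyState, stepMs) {
    var i = 0;
    var timer = setInterval(function () {
        if (i >= sequence.length) { clearInterval(timer); return; }
        applyState(sequence[i++]); // -- re-filters the live map, no video frames involved
    }, stepMs);
}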

State of state 

Keeping the current state of the filters modified by the user is the first step.

While a lot of user interaction already happens on the client side, most of the front-leading solutions do not keep the state of the selected filters, so refreshing the URL doesn’t keep what the user has previously selected – in short, all user UI state is lost. Maybe this is a ‘feature’ to reset the state, not an inconsistency; but then you need to make another button that explicitly gives you a link to the current state…

In iKatastr.cz I keep the state in the URL – this makes it easy to go forward/backward through what the user has previously selected (here a parcel). Moreover, it enables direct link sharing and means one button less in the UI (for sharing a link) – whatever the user sets in the UI on the page is reflected in the URL as a fragment (starting with #), so at any time the user can grab the URL and pass it on by SMS or email – this plays well on iOS/Android with the default ‘share url’ function of the browsers. So try this link: https://ikatastr.cz/#kde=49.31166,17.75353,18&mapa=zakladni&vrstvy=parcelybudovy&info=49.31142,17.75423

and try to refresh it – you will see the same state as before the refresh. You can also try to select other parcels and then press the back button in the browser. You can experience back/forward ‘sequencing’ – manual sequencing of your previous user interactions. This is most likely the way to go and further evolve into full step-sequencing.
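A generic sketch of this pattern (not the actual iKatastr.cz code): every UI change rewrites location.hash, and a refresh or back/forward navigation replays the state through the hashchange handler.

// -- generic sketch of URL-fragment state keeping (not the iKatastr.cz code)
function writeState(state) {
    // -- serialize UI state into the fragment; keys here mirror the link above
    location.hash = '#kde=' + state.lat + ',' + state.lon + ',' + state.zoom +
                    '&vrstvy=' + state.layers;
}

function readState() {
    var params = {};
    location.hash.slice(1).split('&').forEach(function (pair) {
        var kv = pair.split('=');
        params[kv[0]] = kv[1];
    });
    return params;
}

// -- back/forward buttons, refreshes and pasted links all arrive here
window.addEventListener('hashchange', function () {
    applyState(readState()); // -- applyState: hypothetical function updating map + UI
});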

GEO Visual GPU Analytics notes

Update June 2019: the idea of sequencing and replay is mentioned here: https://blog.sumbera.com/2019/06/28/motion-sequence-and-replay-in-dynamic-maps/

****

With some delay, but before the year ends, I have to wrap up my presentation from the GIS Hackathon in March 2017 in Brno, called Geo Visual GPU Analytics. It is available here in CZ: https://www.slideshare.net/sumbera/geo-vizualni-gpu-analytika. There are more pictures than text, so here I will try to add some comments to the slides.

slide 3,4: credits to my sources of inspiration – Bret Victor, Oblivion GFX, Nick Qi Zhu.

slide 5: this is a snippet from my “journey log” (working diary). Every working day I keep a short memo of what I did, or anything significant that happened. It serves several purposes; for example, in this case I had given up on trying WebGL, spent one or two days on another subject and then returned to the problem – and voila, I could resolve it. Every day counts; it helps to keep discipline and to learn from past entries. Getting to know WebGL opened really ‘New Horizons’ of the GPU computing universe.

slide 7: “better a bird in the hand than a pigeon on the roof” (the English equivalent is ‘a bird in the hand is worth two in the bush’). This proverb is put into the context of edge vs cloud computing on slide 9. In the hand – this is the edge; on the roof – this is the cloud. So I believe that what users can hold in their hand, wear, or experience ‘nearby’ ‘is better’ (or more exciting) than what exists somewhere far away (despite its better parameters).

slide 8: We have the same term for tool and instrument in Czech – ‘nástroj’ – so the question is: musical instrument, or just instrument (aka tool)? This goes to the whole topic of latency in user interaction, described for instance here. I tend to compare the right approach with a musical instrument, where a tight feedback loop happens between the player and the instrument. The instrument must respond in less than 10 ms to tighten the feedback loop, so the player can feel the instrument as his own ‘body’, forget about the ‘mechanics’, and rather flow on the expressiveness of what he is interpreting or improvising. (right picture credit here) Why not have such tools in visual analytics? Why do we need to wait for a response from the server if the same task can be done quite well on the edge? The mGL library for GPU-powered visualization on the web, or ImpactIN for iOS using the Apple Pencil, reflect this principle. We have real-time rendering; we need human-sense-time interaction, and the bloated abstractions of the current software stack do not help here despite the advances in hardware – a nice write-up about the latency problem is here. As a side note, there are computer types with very low latency – check any synthesizer or digital instrument, where latency from user interaction must be very low; hence the left picture on that slide represents them (a combination of MIDI pad + guitar).

Here is a short video from the Korg Monologue synth of something used since the 70’s; I consider this type of low-latency feedback loop applied to new domains a fascinating subject to explore. Notice the real-time filter modification.

slide 9,10: a nice chart from 2012 from britesnow.com on the cyclic nature of server vs client processing. I stated there that innovation happens on the client (on the edge), as servers (clouds, mainframes) could always do anything and everything. Exaggerated, and related to slide 7 described above. Workstations, PCs, smartphones (the 1st iPhone), AR/VR devices, wearables in general etc. – it is always about efficiency in the used space. Interestingly, NVIDIA GPU Gems states something similar at the chip level.

slide 11: a chart of the GPU outperforming the CPU in conjunction with video resolution.

slide 12: the trickiest slide, ironically called “Find 10 differences”. On the left side is a program I wrote in 1993, in DOS; on the right, one I wrote using WebGL in 2016. Both examples are great achievements; the right side does GPU-based filtering (or, in marketing speak, ‘in-memory’) with low user latency, so it redraws immediately as the user filters by pointing the mouse at the brush selector. The left one was created in the DOS era, when each graphics card had its own way of mode switching, and that app could utilize the maximum of the graphics card using 640×480 resolution with 256 colors! That was something at that time. However, something is wrong in trying to find 10 differences, as they are basically so similar – both using a monitor, keyboard/mouse, and the same layout…

slide 13: the last slide, titled “Find 1 difference”, is the answer to the dilemma from slide 12 – the AR experience: a new way of interaction, a new type of device for new workflows, visual analytics, exploration etc. For one example of the many possibilities of AR, here is a nice video from HxGN Live 2017:

 

GPU accelerated visual analytics

In recent months I have had a lot of fun working on a WebGL component called “mGL” for visualizing and filtering large amounts of data in the browser. It has been used in the Incident Analyzer and Area Analyzer Smart M.Apps. Here are two videos of the mGL test app that show its potential. The most interesting part is the filtering, which takes place in the fragment shader. mGL itself has an API that can connect to crossfilter to control filtering, or an adapter so it can be used with dc.js.

The videos show 400k parcels in Cincinnati and a 400k-segment road network in North Carolina, both with fast cross-filtering on several attributes. You can switch between the dimensions represented by the charts by clicking on their labels. In the first video, the map (a road network) reflects the chart’s colors and responds immediately to changing filters on either a chart or the map; the second video shows the 400k parcels in Cincinnati with the same behavior.
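To illustrate the idea of filtering in the fragment shader – mGL’s actual shader code is not shown here, so treat this only as a sketch of the principle: each feature carries an attribute value, the active filter range arrives as a uniform, and filtered-out fragments are simply discarded, so changing a filter uploads two floats instead of rebuilding geometry.

// -- sketch of the principle only; mGL's real shaders are not reproduced here
var filterFragmentShader = [
    'precision mediump float;',
    'varying float v_value;       // per-feature attribute from the vertex shader',
    'uniform vec2  u_filterRange; // [min, max] of the active dimension',
    'uniform vec4  u_color;',
    'void main() {',
    '    if (v_value < u_filterRange.x || v_value > u_filterRange.y) discard;',
    '    gl_FragColor = u_color;',
    '}'
].join('\n');

// -- when a chart brush changes, only the range uniform travels to the GPU:
// gl.uniform2f(u_filterRangeLocation, brushMin, brushMax);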

 

 

SVG fast scaled overlay on Leaflet 1.0 and 0.7

Scaled SVG can be drawn on a map in a much faster way than traditional approaches, at least for points. The traditional approach re-positions each element to fit into the view of the map; however, SVG is “scalable”, so we can exploit that, and it performs much faster for zoom-in/out.

A few considerations:

  1. SVG itself defines a viewport by its coordinate space; everything outside this viewport is usually clipped, so it is important to keep the SVG viewport in line with the viewport of the map. There are approaches that resize the SVG as you zoom in (here), and while this works, it has problems at deep zooms when you need to pan the map (you actually move a giant SVG whose size depends on the zoom).
  2. Translating LatLng to absolute pixel values (like here, used for WebGL) is a possible solution; however, IE and FF have problems with large numbers in transform (>1 M), so we need to get the SVG elements in view coordinates and translate them.
  3. Keeping track of the bounding box of all elements (like, again, used here) should be avoided (the SVG or its group element knows the extent of the elements it holds).
  4. So, while we keep the SVG in the viewport, we need to compensate any shift and zoom by translating the <g> (group) of all elements.
  5. So in Leaflet, when the map moves, the SVG is translated back to its original position while the <g> is translated forward to reflect the map movement.
  6. We need to keep track of the LatLng position of either the map center or one of the corners – we use the topLeft corner.
  7. Leaflet doesn’t do precise enlargement and rounds view points because of some CSS troubles on some devices (noted here). We need to patch two translating functions in Leaflet to get this right (so the SVG enlargement is aligned with the map)… but I need to look at this again; the best would be not to patch Leaflet, of course.

The most important things happen in the moveEnd event:

 


 var bounds = this._map.getBounds(); // -- latLng bounds of map viewport
 var topLeftLatLng = new L.LatLng(bounds.getNorth(), bounds.getWest()); // -- topLeft corner of the viewport
 var topLeftLayerPoint = this._map.latLngToLayerPoint(topLeftLatLng); // -- translating to view coord
 var lastLeftLayerPoint = this._map.latLngToLayerPoint(this._lastTopLeftlatLng);

 var zoom = this._map.getZoom();
 var scaleDelta = this._map.getZoomScale(zoom, this._lastZoom); // -- amount of scale from previous state e.g. 0.5 or 2
 var scaleDiff = this.getScaleDiff(zoom); // -- diff of how far we are from initial scale

 this._lastZoom = zoom; // -- we need to keep track of last zoom
 var delta = lastLeftLayerPoint.subtract(topLeftLayerPoint); // -- get incremental delta in view coord

 this._lastTopLeftlatLng = topLeftLatLng; // -- we need to keep track of last top left corner, with this we do not need to track center of enlargement
 L.DomUtil.setPosition(this._svg, topLeftLayerPoint); // -- reset svg to keep it inside map viewport

 this._shift._multiplyBy(scaleDelta)._add(delta); // -- compute new relative shift from initial position
 // -- set group element to compensate for svg translation, and scale
 this._g.setAttribute("transform", "translate(" + this._shift.x + "," + this._shift.y + ") scale(" + scaleDiff + ")");
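For completeness, here is a rough sketch of the initial state the handler above relies on (_shift, _lastZoom, _lastTopLeftlatLng and getScaleDiff). This layer skeleton is my illustration, not the code from the gist:

 // -- sketch: initialization assumed by the moveEnd handler above (hypothetical layer skeleton)
 onAdd: function (map) {
     this._map = map;
     this._svg = document.createElementNS('http://www.w3.org/2000/svg', 'svg');
     this._g = document.createElementNS('http://www.w3.org/2000/svg', 'g');
     this._svg.appendChild(this._g);
     map.getPanes().overlayPane.appendChild(this._svg);

     var bounds = map.getBounds();
     this._lastTopLeftlatLng = new L.LatLng(bounds.getNorth(), bounds.getWest());
     this._initialZoom = this._lastZoom = map.getZoom(); // -- scale reference for getScaleDiff
     this._shift = new L.Point(0, 0); // -- cumulative <g> translation in view coords

     map.on('moveend', this._onMoveEnd, this);
 },

 getScaleDiff: function (zoom) {
     // -- how far the current zoom is from the scale at which the SVG content was built
     return this._map.getZoomScale(zoom, this._initialZoom);
 }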

Test page / Gist : http://bl.ocks.org/Sumbera/7e8e57368175a1433791

To better illustrate the movement of the SVG inside the map, here is a small diagram of the basic SVG states:

Smart M.App

In recent months I have been programming the “Green Space Analyzer” web app, which shows a modern approach to visualizing and querying multi-temporal geospatial data. The user sees information in a form he can interact with, and can discover new patterns, phenomena or information thanks to the very fast feedback of the UI response to his input. When the user selects, for example, a certain area, all graphs instantly animate a transition to reflect the selection made; this helps to better understand the dynamics of the change. Animation can be seen everywhere – from the labels on the bar chart, through the color changes of the choropleth, up to the title summary. It creates a subtle feeling of control, of knowing what has changed and how it has changed. At the HxGN 15 conference, in the Hexagon Geospatial keynote, CEO Mladen Stojic showcased it as part of the vision called Smart M.App – worth a look (the Smart M.App demo starts at 52:40):

 

my Smart M.App  ‘world tour’:

 

 

 

geojson-vt on leaflet

Update Sept 2015: a nice explanation of how geojson-vt works is here.

Mapbox technologies used in their WebGL and OpenGL libraries are being extracted into standalone pieces. Vladimir Agafonkin, creator of leaflet.js and earcut.js, provided a slicing and polygon simplification library for GeoJSON called geojson-vt. geojson-vt can slice GeoJSON into tiles, aka Mapbox tiles.
A quick test and sample of using geojson-vt on Leaflet with canvas drawing is available here: http://bl.ocks.org/sumbera/c67e5551b21c68dc8299
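The core of that sample follows the geojson-vt README: build a tile index once, then draw the pre-cut features for each canvas tile (geometry comes back in tile coordinates, 0..4096 by default). A condensed sketch:

// -- build the tile index once (options shown are the geojson-vt defaults)
var tileIndex = geojsonvt(geojsonData, { maxZoom: 14, tolerance: 3, extent: 4096 });

// -- in the canvas tile callback: fetch and draw one tile (line/polygon features)
function drawTile(canvas, tilePoint, zoom) {
    var tile = tileIndex.getTile(zoom, tilePoint.x, tilePoint.y);
    if (!tile) { return; }
    var ctx = canvas.getContext('2d');
    var ratio = canvas.width / 4096; // -- tile coords -> canvas pixels
    tile.features.forEach(function (feature) {
        ctx.beginPath();
        feature.geometry.forEach(function (ring) {
            ring.forEach(function (pt, i) {
                i ? ctx.lineTo(pt[0] * ratio, pt[1] * ratio)
                  : ctx.moveTo(pt[0] * ratio, pt[1] * ratio);
            });
        });
        ctx.stroke(); // -- or fill() for polygon features
    });
}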

2 videos:

Overview of various geojson samples

 

The following video shows a 280 MB large GeoJSON!

 

For comparison, here is a WebGL version on the same data. This version loads all data into the GPU and leaves everything to WebGL (no optimization). It also takes slightly more time to tessellate all polygons, but once done, everything seems to run fine. The code used is available here.

WMS overlay with MapBox-gl-js 0.5.2

A quick and dirty test of the WMS capabilities of the new MapBox-gl-js 0.5.2 API. First of all, yes! It is possible to overlay (legacy) WMS over the vector WebGL-rendered base map… however, the way is not straightforward:

 

  • It needs some ‘hacks’, as the current version of the API doesn’t have enough events to supply a custom URL before it is loaded. But check the latest version of mapbox; it might have better support for this.
  • Another issue is that the WMS server has to provide the HTTP header Access-Control-Allow-Origin: * to avoid a WebGL CORS failure when loading the image (gl.texImage2D). Usually WMS servers don’t care about this, as CORS doesn’t apply to normal img tags; here WebGL has access to the raw image data, so the WMS provider has to explicitly agree to this (see the client-side sketch after this list).
  • The build process of mapbox-gl-js tends to be, like many other large JS projects, complicated, slow and complex. Specifically, it is more difficult to get mapbox-gl-js installed and building on the Windows platform than on Mac.
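On the client side, the texture image must also be requested with CORS enabled; a generic sketch, independent of mapbox-gl-js:

// -- generic sketch: loading a cross-origin WMS image as a WebGL texture source
var img = new Image();
img.crossOrigin = 'anonymous'; // -- request CORS; the server must answer with Access-Control-Allow-Origin
img.onload = function () {
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
};
img.src = wmsUrl; // -- wmsUrl: a WMS GetMap request with the tile's BBOX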

The code is documented to guide you through the process; a few highlights:


 // -- routine originally found in GlobalMercator.js, simplified
 // -- calculates spherical mercator coordinates from tile coordinates
 function tileBounds(tx, ty, zoom, tileSize) {
     function pixelsToMeters(px, py, zoom) {
         var res = (2 * Math.PI * 6378137 / 256) / Math.pow(2, zoom),
             originShift = 2 * Math.PI * 6378137 / 2,
             x = px * res - originShift,
             y = py * res - originShift;
         return [Math.abs(x), Math.abs(y)];
     }
     var min = pixelsToMeters(tx * tileSize, ty * tileSize, zoom),
         max = pixelsToMeters((tx + 1) * tileSize, (ty + 1) * tileSize, zoom);
     return min.concat(max);
 }

 

// -- save orig _loadTile function so we can call it later
// -- there was no good pre-load event in the mapbox API to hook into and patch the url
// -- we need to use the undocumented _loadTile
var origFunc = sourceObj._loadTile;
// -- replace _loadTile with our own implementation
sourceObj._loadTile = function (id) {
    // -- we have to patch sourceObj.url, dirty!
    // -- we basically change the url on the fly with correct BBOX coordinates
    // -- and leave the rest to the original _loadTile processing
    var origUrl = sourceObj.tiles[0]
                     .substring(0, sourceObj.tiles[0].indexOf('&BBOX'));
    origUrl = origUrl + "&BBOX={mleft},{mbottom},{mright},{mtop}";
    sourceObj.tiles[0] = patchUrl(id, [origUrl]);
    // -- call the original method
    return origFunc.call(sourceObj, id);
}
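patchUrl itself is in the gist; a simplified sketch of the idea (assuming z/x/y have already been extracted from the mapbox tile id) just fills the BBOX placeholders using tileBounds from above:

// -- simplified sketch of what patchUrl does (the gist decodes z/x/y from the tile id first)
function fillBBox(urlTemplate, z, x, y) {
    var b = tileBounds(x, y, z, 256); // -- [minX, minY, maxX, maxY] in meters
    return urlTemplate.replace('{mleft}', b[0])
                      .replace('{mbottom}', b[1])
                      .replace('{mright}', b[2])
                      .replace('{mtop}', b[3]);
}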

 

 

Gist available here.

WebGL polyline tessellation with MapBox-GL-JS

Update 09/2015: a test of the tesspathy.js library is here. Other sources to look at:

  1.  http://mattdesl.svbtle.com/drawing-lines-is-hard
  2.  https://github.com/mattdesl/extrude-polyline
  3. https://forum.libcinder.org/topic/smooth-thick-lines-using-geometry-shader

*** original post ***

This post attempted to use pixi.js tessellation of a polyline; this time, let’s look at how mapbox-gl-js does it. In short: much better than pixi.js.

It took slightly more time to get the right routines from mapbox-gl-js and find out where the tessellation is calculated and drawn. It actually happens in two places – in LineBucket.js and in the line shader. The FireFox shader editor helped a lot to simplify and extract the needed calculations and bring them into JavaScript (for simplification; note, however, that the shader-based approach is the right one, as you can dynamically influence the thickness of lines, while having a precalculated mesh means that each time you need to change the line thickness, you have to recalculate the whole mesh and update the buffers).

 

// -- module require mockups so we can use orig files unmodified
module = {};
reqMap = {
    './elementgroups.js': 'ElementGroups',
    './buffer.js': 'Buffer'
};
require = function (jsFile) { return eval(reqMap[jsFile]); };

 

 <!-- all mapbox dependency for tesselation of the polyline -->
 <script src="http://www.sumbera.com/gist/js/mapbox/pointGeometry.js"></script>
 <script src="http://www.sumbera.com/gist/js/mapbox/buffer.js"></script>
 <script src="http://www.sumbera.com/gist/js/mapbox/linevertexbuffer.js"></script>
 <script src="http://www.sumbera.com/gist/js/mapbox/lineelementbuffer.js"></script>
 <script src="http://www.sumbera.com/gist/js/mapbox/elementgroups.js"></script>
 <script src="http://www.sumbera.com/gist/js/mapbox/linebucket.js"></script>
 <script src="http://www.sumbera.com/gist/data/route.js" charset="utf-8"></script>
 // -- we don't use these buffers, override them later, just set them for addLine func
 var bucket = new LineBucket({}, {
 lineVertex: (LineVertexBuffer.prototype.defaultLength = 16, new LineVertexBuffer()),
 lineElement: (LineElementBuffer.prototype.defaultLength = 16, new LineElementBuffer())
 });

var u_linewidth = { x: 0.00015 };
// override .add to get calculated points
LineVertexBuffer.prototype.add = function (point, extrude, tx, ty, linesofar) {
    point.x = point.x + (u_linewidth.x * LineVertexBuffer.extrudeScale * extrude.x * 0.015873);
    point.y = point.y + (u_linewidth.x * LineVertexBuffer.extrudeScale * extrude.y * 0.015873);
    verts.push( point.x, point.y);
    return this.index;
};

// -- pass vertexes into the addLine func that will calculate points
bucket.addLine(rawVerts, "miter", "butt", 2, 1);

Prototype code posted here.

 

WebGL polyline tessellation with pixi.js

Update 09/2015: other triangulation methods (mapbox, tesspathy) are mentioned here.

pixi.js is a 2D open-source library for gaming that includes WebGL support for rendering primitives. Why not utilize it for polyline rendering on a map? It turned out, however, that the tessellation of polylines is not handled well.

The most important code snippets:

<script src="Pixi.js"></script>
<script src="Point.js"></script>
<script src="WebGLGraphics.js"></script>

//--data
<script src="route.js" charset="utf-8"></script>

var graphicsData = {
 points: verts,
 lineWidth: 0.00015,
 lineColor: 0x33FF00,
 lineAlpha: 0.8
};
var webGLData = {
   points: [],
   indices: []
 };
 // -- from pixi/utils
 PIXI.hex2rgb = function (hex) {
   return [(hex >> 16 & 0xFF) / 255,
           (hex >> 8 & 0xFF) / 255,
           (hex & 0xFF) / 255];
  };

PIXI.WebGLGraphics.buildLine(graphicsData, webGLData);
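After buildLine runs, webGLData holds the tessellated mesh. A hedged sketch of pushing it to the GPU – in this pixi version the points array interleaves [x, y, r, g, b, a] per vertex and the mesh is an indexed triangle strip; a ready gl context and a shader with aPosition/aColor attribute locations are assumed:

// -- sketch: upload the mesh produced by buildLine and draw it
var vb = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vb);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(webGLData.points), gl.STATIC_DRAW);

var ib = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, ib);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(webGLData.indices), gl.STATIC_DRAW);

var stride = 6 * 4; // -- 6 floats per vertex: x, y, r, g, b, a
gl.vertexAttribPointer(aPosition, 2, gl.FLOAT, false, stride, 0);
gl.vertexAttribPointer(aColor, 4, gl.FLOAT, false, stride, 2 * 4);
gl.drawElements(gl.TRIANGLE_STRIP, webGLData.indices.length, gl.UNSIGNED_SHORT, 0);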

I have put a sample here:

Another implementation of polyline tessellation (seemingly more functional) is in mapbox-gl-js, in LineBucket. The mapbox-gl-js code took quite a bit more time to get running and debug on the Windows platform; I had to run npm install from the VS command shell and read carefully what all the npm errors were saying (e.g. the Python version should be < 3). Then FireFox for some reason didn’t trigger a breakpoint on LineBucket.addLine; it took more time to find out that I should rather debug this in Chrome. See the blog here. Anyway, good experience with all the messy npm modules, their install requirements and unnecessary complexity. All the npm modules also take more than 200 MB, but some of them are optional in the install.

After all, a basic LINE draw in WebGL (without thickness and styling) is useful too; in the picture above you can see railways in the CZ city of Ostrava.

 

WebGL polygons fill with libtess.js


Update 1.6.2015: geojson-vt seems to do a great job in tiling and simplifying polygons. Check this post.

Update 18.1.2015: Vladimir Agafonkin from MapBox released earcut.js – a very fast and reliable triangulation library. Worth checking out. Video available here:
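For reference, earcut’s API is tiny – it takes flat coordinates (plus optional hole indices) and returns triangle indices. A small usage sketch:

// -- earcut usage sketch: a square with a triangular hole
var flat = [0, 0, 100, 0, 100, 100, 0, 100,   20, 20, 80, 20, 50, 80];
var holeIndices = [4]; // -- the hole starts at vertex index 4
var triangles = earcut(flat, holeIndices, 2); // -- flat array of vertex indices, 3 per triangle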

 

 

Original post:

Brendan Kenny from Google showed here how he made polygons using libtess.js on Google Maps, so I tried that too, with a single large-enough polygon on Leaflet with CZ districts. libtess.js is a port from C code. Neither plntri.js (update: see also the comments for plntri v2.0 details) nor PolyK.js was able to triangulate as large a set of points as libtess.js.

Update: I looked at poly2tri.js too, with the following results:

I could run 2256 polygons (altogether > 3M vertexes) with poly2tri in 16 701 ms vs 127 834 ms (libtess); however, I had to dirty-fix various other errors from poly2tri (null triangles or “FLIP failed due to missing triangle…”, so some polygons were wrong), while libtess was fine for the same data.

Here is a test: 3M vertexes with 1M triangles were generated by libtess in 127 s; poly2tri took 16 s. Drawing is still fine, but it is ‘just enough’ for WebGL too.

 

 

The key part is listed below:


tessy.gluTessNormal(0, 0, 1);
tessy.gluTessBeginPolygon(verts);

tessy.gluTessBeginContour();

// -- see blog comment below on using Array.map
data.features[0].geometry.coordinates[0].map(function (d, i) {
    pixel = LatLongToPixelXY(d[1], d[0], 0);
    var coords = [pixel.x, pixel.y, 0];
    tessy.gluTessVertex(coords, coords);
});

tessy.gluTessEndContour();
// finish polygon (and time triangulation process)
tessy.gluTessEndPolygon();
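The snippet assumes a tessy tesselator and a verts output array set up beforehand; in Brendan Kenny’s libtess.js demo that setup looks roughly like this (reproduced from memory, so treat it as a sketch):

// -- sketch of the tesselator setup the snippet above assumes
function initTesselator() {
    var tessy = new libtess.GluTesselator();
    // -- called once per output vertex; polygonData is the verts array
    // -- passed to gluTessBeginPolygon
    tessy.gluTessCallback(libtess.gluEnum.GLU_TESS_VERTEX_DATA,
        function (data, polygonData) {
            polygonData.push(data[0], data[1]);
        });
    // -- combine callback creates a new vertex where edges intersect
    tessy.gluTessCallback(libtess.gluEnum.GLU_TESS_COMBINE,
        function (coords, data, weight) {
            return [coords[0], coords[1], coords[2]];
        });
    // -- an edge-flag callback forces plain GL_TRIANGLES output (no fans/strips)
    tessy.gluTessCallback(libtess.gluEnum.GLU_TESS_EDGE_FLAG, function (flag) {});
    return tessy;
}
var tessy = initTesselator();
var verts = [];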

code available here: http://bl.ocks.org/sumbera/01cbd44a77b4283e6dcd

 

There is also an EMSCRIPTEN version of tesslib.c available on GitHub, and I was curious whether this version would increase the speed of computation. I could run it, but for large polygons (cca 120k verts of the CZ boundary) I had to increase the module memory to 64 MB for FireFox. Tessellating 120k verts in FF-30 took 21 s; IE-11 and Ch-36 failed, reporting out of stack memory :(

Getting back to the version from Brendan (no emscripten), I quickly measured the same data on browsers: IE-11 21 s, Ch-36 31 s, FF-30 27 s.

Update Oct/2014: polyline tessellation blog post here.