Category Archives: WebGL

Motion sequence and replay in smart maps

When dynamic maps were still in early stages of development, or even just concepts, I was already thinking about a ‘replay’ function – a sort of step-sequencing over a certain dimension. Some of these ideas were presented in the “Incident Prototype” in 2014, here at 2:24, where you can see sequencing over a day or month period.

Another demonstration took place in the Green Space Analyzer in 2015, when Smart M.Apps were introduced – at 2:04 the years are sequenced.

An important part of the idea was the editing functionality – how to sequence and capture ‘user motion’ in general. In multidimensional data there is a nearly unlimited number of combinations and ways the data can be filtered over time. However, instead of continuous time sequencing, we can simplify to ‘step-sequencing’, which is well known in the music industry – many digital instruments offer it, especially rhythm, drum or groove machines. The geospatial industry could learn a lot from looking at how this function, core to music production, is done there and take inspiration for smart mapping.

For example, the Korg Minilogue xd captures ‘user motion’ of the filters, which can be defined for each of the 16 steps.

Why is this important? Imagine storytelling where someone shows how to filter the data to reach certain interesting results – they then save their choices of filters over time (in the form of a motion sequence) so other people can load it, ‘replay’ it and tweak it further.

I believe this might be super useful for showcasing how some phenomenon evolves, or how someone (an expert) arrived at certain results.

Or it might be useful as a ‘smart video’ of maps – instead of publishing a dumb video, you publish a dynamic map plus a ‘motion sequence’ of when and what is filtered (including map manipulations like zooms).
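A minimal sketch of what such a motion sequence and its replay could look like – the data shape, applyFilters and map.flyTo are illustrative placeholders here, not any concrete library API:

// -- hypothetical motion sequence: each step captures filter values and view state
var motionSequence = [
  { step: 1, durationMs: 1000, filters: { year: 2014 }, view: { center: [16.60, 49.20], zoom: 11 } },
  { step: 2, durationMs: 1000, filters: { year: 2015 }, view: { center: [16.60, 49.20], zoom: 12 } },
  { step: 3, durationMs: 2000, filters: { year: 2016 }, view: { center: [16.61, 49.19], zoom: 14 } }
];

// -- replay the captured steps one after another, like a step sequencer
async function replay(sequence, map, applyFilters) {
  for (const step of sequence) {
    applyFilters(step.filters);   // re-apply the captured filter values
    map.flyTo(step.view);         // re-apply the captured map manipulation (zoom/pan)
    await new Promise(function (resolve) { setTimeout(resolve, step.durationMs); });
  }
}

The same structure that drives the replay could also be produced by recording the user’s interactions, which is exactly the ‘capture user motion’ part of the idea.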

Yes, this is like what everyone knows from weather forecasts – a replay or forecast of cloud/pressure/wind motion – except that instead of a video you get a real smart map, one that replays its state based on the sequencer data.

State of state 

Keeping the current state of the filters modified by the user is the first step.

While a lot of user interaction already happens on the client side, most of the leading front-end solutions do not keep the state of the selected filters, so refreshing the URL does not restore what the user previously selected – in short, all UI state is lost. Maybe this is a ‘feature’ to reset the state rather than an inconsistency, but then you need another button that explicitly gives you a link to the current state…

In iKatastr.cz I keep the state in the URL – this makes it easy to go forward/backward through what the user previously selected (here a parcel). Moreover, it enables direct link sharing and one button less in the UI (for sharing a link) – whatever the user sets in the UI is reflected in the URL as a fragment (starting with #), so at any time the user can grab the URL and pass it on by SMS or email – this plays well with the default ‘share URL’ function of the browsers on iOS/Android. So try this link: https://ikatastr.cz/#kde=49.31166,17.75353,18&mapa=zakladni&vrstvy=parcelybudovy&info=49.31142,17.75423

and try to refresh it – you will see the same state as before the refresh. You can also try to select other parcels and then press the back button in the browser. You then experience back/forward ‘sequencing’ – manual sequencing of your previous interactions. This is most likely the way to go, and it can further evolve into full step-sequencing.
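As a rough illustration of the pattern (this is not the actual iKatastr.cz code, and the parameter names are only placeholders), mirroring UI state in the URL fragment can look roughly like this:

// -- serialize the current UI state into the URL fragment (after '#')
function saveStateToHash(state) {
  var params = new URLSearchParams(state);
  // pushState keeps each change in browser history, so back/forward steps through past selections
  history.pushState(null, '', '#' + params.toString());
}

// -- restore state on page load or when the user navigates back/forward
function readStateFromHash() {
  return Object.fromEntries(new URLSearchParams(location.hash.slice(1)));
}

window.addEventListener('popstate', function () {
  var state = readStateFromHash();   // e.g. { kde: '49.31,17.75,18', vrstvy: 'parcelybudovy' }
  // -- re-apply the restored state to the map and UI here
});

Because each selection is pushed onto the history stack, the browser’s back/forward buttons already give you the manual ‘sequencing’ described above for free.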

GEO Visual GPU Analytics notes

Update June 2019: the idea of sequencing and replay is mentioned here: https://blog.sumbera.com/2019/06/28/motion-sequence-and-replay-in-dynamic-maps/

****

With some delay, but before the year ends, I have to wrap up my presentation from the GIS Hackathon in March 2017 in Brno, called Geo Visual GPU Analytics. It is available here in Czech: https://www.slideshare.net/sumbera/geo-vizualni-gpu-analytika . There are more pictures than text, so here I will try to add some comments to the slides.

slide 3,4: credits to my sources of inspiration – Victor Bret, Oblivion GFX, Nick Qi Zhu.

slide 5: this is a snippet from my “journey log” (working diary) – every working day I keep a short memo of what I did, or anything significant that happened. It serves several purposes; for example, in this case I had given up on trying WebGL, spent a day or two on another subject and then returned to the problem – and voilà, I could resolve it. Every day counts; it helps to keep discipline and to learn from past entries. Getting to know WebGL really opened ‘new horizons’ of the GPU computing universe.

slide 7: “better a bird in the hand than a pigeon on the roof” (the English equivalent is ‘a bird in the hand is worth two in the bush’). This proverb is put into the context of edge vs cloud computing on slide 9. In the hand – that is the edge; on the roof – that is the cloud. So I believe that what users can hold in their hand, wear, or experience ‘nearby’ is ‘better’ (or more exciting) than what exists somewhere far away (despite its better parameters).

slide 8: in Czech we have the same word for tool and instrument – ‘nastroj’ – so the question is: musical instrument, or just an instrument (i.e. a tool)? This leads to the whole topic of latency in user interaction, described for instance here. I tend to compare the right approach with a musical instrument, where a tight feedback loop exists between the player and the instrument. The instrument must respond in less than 10 ms to close that feedback loop, so the player can feel the instrument as their own ‘body’, forget about the ‘mechanics’ and instead flow with the expressiveness of what they are interpreting or improvising. (right picture credit here) Why not have such tools in visual analytics? Why do we need to wait for a response from the server if the same task can be done quite well on the edge? The mGL library for GPU-powered visualization on the web, or ImpactIN for iOS using the Apple Pencil, reflect this principle. We have real-time rendering; we need human-sense-time interaction, and the bloated abstractions of the current software stack do not help here despite the advances in hardware – there is a nice write-up about the latency problem here. As a side note, there are types of computers with very low latency – check any synthesizer or digital instrument, where latency from user interaction must be very low; hence the left picture on that slide represents them (a combination of a MIDI pad and a guitar).

Here is a short video of the Korg Monologue synth doing something that has been in use since the 70’s; I consider this type of low-latency feedback loop, applied to new domains, a fascinating subject to explore. Notice the real-time filter modification.

slide 9,10: a nice chart from 2012 from britesnow.com on the cyclic nature of server vs client processing. I stated there that innovation happens on the client (on the edge), since servers (clouds, mainframes) could always do anything and everything. Exaggerated, and related to slide 7 described above. Workstations, PCs, smartphones (the first iPhone), AR/VR devices, wearables in general, etc. – it is always about efficiency in the space used. Interestingly, NVIDIA’s GPU Gems states something similar at the chip level.

slide 11: a chart of the GPU outperforming the CPU, in conjunction with video resolution.

slide 12: the trickiest slide, ironically called “Find 10 differences”. On the left side is a program I wrote in 1993, in DOS; on the right is one I wrote using WebGL in 2016. Both were great achievements for their time: the right side does GPU-based filtering (or, in marketing terms, ‘in-memory’) with low user latency, so it redraws immediately as the user filters with the mouse on a brush selector. The left one was created in the DOS era, when each graphics card had its own way of mode switching, and that app could push the graphics card to its maximum – 640×480 resolution with 256 colors, which was really something at the time. However, something is wrong with trying to find 10 differences, because the two are basically so similar – both use a monitor, keyboard/mouse, and the same layout…
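To illustrate what the GPU-based filtering on the right side boils down to (a simplified sketch of the general technique, not the actual code behind the slide): the data stays on the GPU, the brushed range is uploaded as a shader uniform, and every brush move is just a uniform update plus one draw call.

// -- vertex shader: each point carries the attribute being filtered (a_value);
// -- points outside the brushed range are moved outside clip space so they are not drawn
var vertexShaderSrc = `
  attribute vec2 a_position;
  attribute float a_value;
  uniform vec2 u_range;   // currently brushed [min, max]
  void main() {
    float visible = step(u_range.x, a_value) * step(a_value, u_range.y);
    gl_Position = visible > 0.0 ? vec4(a_position, 0.0, 1.0) : vec4(2.0, 2.0, 0.0, 1.0);
    gl_PointSize = 2.0;
  }`;

// -- on every brush move just update the uniform and redraw;
// -- assumes the program is already bound (gl.useProgram) and pointCount was set at load time
function onBrush(gl, program, pointCount, min, max) {
  gl.uniform2f(gl.getUniformLocation(program, 'u_range'), min, max);
  gl.drawArrays(gl.POINTS, 0, pointCount);
}

No data is re-uploaded or re-filtered on the CPU, which is why the redraw can keep up with the mouse.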

slide 13: the last slide, titled “Find 1 difference”, is the answer to the dilemma from slide 12 – the AR experience: a new way of interaction, a new type of device for new workflows, visual analytics, exploration etc. For one example of the many possibilities of AR, here is a nice video from HxGN Live 2017:

 

WMS overlay with MapBox-gl-js 0.5.2


A quick and dirty test of the WMS capabilities of the new MapBox-gl-js 0.5.2 API. First of all, yes! It is possible to overlay a (legacy) WMS over the vector WebGL-rendered base map… however, the way there is not straightforward:

 

  • It needs some ‘hacks’, as the current version of the API doesn’t have enough events to supply a custom URL before the tile is loaded. But check the latest version of mapbox-gl-js, it might have better support for this.
  • Another issue is that the WMS server has to provide the HTTP header Access-Control-Allow-Origin: * to avoid a WebGL CORS failure when loading the image (gl.texImage2D). Usually WMS servers don’t care about this, as CORS doesn’t apply to normal img tags. Here WebGL has access to the raw image data, so the WMS provider has to explicitly allow it.
  • The build process of mapbox-gl-js tends to be, as in many other large JS projects, complicated, slow and complex. Specifically on the Windows platform it is more difficult to get mapbox-gl-js installed and building than on a Mac.

The code is documented to guide you through the process; a few highlights:


// -- routine originally found in GlobalMercator.js, simplified
// -- calculates spherical mercator coordinates from tile coordinates
function tileBounds(tx, ty, zoom, tileSize) {
    // -- converts pixel coordinates at the given zoom to EPSG:3857 meters
    function pixelsToMeters(px, py, zoom) {
        var res = (2 * Math.PI * 6378137 / 256) / Math.pow(2, zoom),
            originShift = 2 * Math.PI * 6378137 / 2,
            x = px * res - originShift,
            y = py * res - originShift;
        return [Math.abs(x), Math.abs(y)];
    }
    var min = pixelsToMeters(tx * tileSize, ty * tileSize, zoom),
        max = pixelsToMeters((tx + 1) * tileSize, (ty + 1) * tileSize, zoom);
    return min.concat(max);
}

 

// -- save the original _loadTile function so we can call it later;
// -- there was no good pre-load event in the mapbox API to hook into and patch the url,
// -- so we need to use the undocumented _loadTile
var origFunc = sourceObj._loadTile;

// -- replace _loadTile with our own implementation
sourceObj._loadTile = function (id) {
    // -- we have to patch sourceObj's tile url, dirty!
    // -- we basically change the url on the fly with the correct BBOX coordinates
    // -- and leave the rest to the original _loadTile processing
    var origUrl = sourceObj.tiles[0]
                      .substring(0, sourceObj.tiles[0].indexOf('&BBOX'))
                  + "&BBOX={mleft},{mbottom},{mright},{mtop}";
    sourceObj.tiles[0] = patchUrl(id, [origUrl]);
    // -- call the original method
    return origFunc.call(sourceObj, id);
};
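patchUrl itself lives in the gist; conceptually it unpacks the tile id into z/x/y, computes the mercator bounds with tileBounds above, and substitutes the {mleft},{mbottom},{mright},{mtop} placeholders. A rough sketch of that idea – the tile-id decoding shown here is only an assumption about the internal TileCoord encoding of that 0.5.x version:

// -- sketch of patchUrl: fill the BBOX placeholders from the tile coordinates
function patchUrl(id, urls) {
    // -- unpack the packed tile id into z/x/y (assumed 0.5.x TileCoord layout)
    var z = id % 32, dim = 1 << z,
        xy = (id - z) / 32,
        x = xy % dim,
        y = ((xy - x) / dim) % dim;
    var b = tileBounds(x, y, z, 256);   // [minX, minY, maxX, maxY] in meters
    return urls[0]
        .replace('{mleft}', b[0]).replace('{mbottom}', b[1])
        .replace('{mright}', b[2]).replace('{mtop}', b[3]);
}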

 

 

gist available here

demo here (Chrome): https://www.sumbera.com/gist/js/mapbox/index.html

Modern data visualization on map

For this year’s HxGN14 conference I have prepared a web app demonstrating modern data visualization. I was inspired by the great ideas of Victor Bret – his research and talks – for the general concept (high interactivity, visualization) of this app.

It is exciting to see what is possible to do today inside the browser, with the interactivity provided by various open source projects (e.g. leaflet, d3 and its plugins) and WebGL technology.