GEO Visual GPU Analytics notes

Update June 2019: an idea on sequencing and replay is mentioned here:


With some delay, but before the year ends, I have to wrap up my presentation from the GIS Hackathon in March 2017 in Brno, called Geo Visual GPU Analytics. It is available here in Czech: . There are more pictures than text, so here I will try to add some comments to the slides.

slides 3, 4: credits to my sources of inspiration: Bret Victor, Oblivion GFX, Nick Qi Zhu.

slide 5: this is a snippet from my “journey log” (working diary); every working day I keep a short memo of what I did, or of anything significant that happened. It serves several purposes. For example, in this case I had given up on trying WebGL, spent a day or two on another subject, then returned to the problem, and voilà, I could resolve it. Every day counts; it helps to keep discipline and to learn from past entries. Getting to know WebGL really opened “new horizons” of the GPU computing universe.

slide 7: “better a sparrow in the hand than a pigeon on the roof” (the English equivalent is “a bird in the hand is worth two in the bush”). This proverb is put into the context of edge vs. cloud computing on slide 9. In the hand: that is the edge; on the roof: that is the cloud. So I believe that what users can hold in their hand, wear, or experience ‘nearby’ is better (or more exciting) than what exists somewhere far away, despite its better parameters.

slide 8: in Czech we have the same word for tool and instrument, ‘nastroj’, so the question is: musical instrument, or just an instrument (i.e., a tool)? This leads to the whole topic of latency in user interaction, described for instance here. I tend to compare the right approach with a musical instrument, where a tight feedback loop forms between the player and the instrument. The instrument must respond in less than 10 ms to close that feedback loop, so the player can feel the instrument as part of his own ‘body’, forget about the ‘mechanics’, and instead flow on the expressiveness of what he is interpreting or improvising. (right picture credit here)

Why not have such tools in visual analytics? Why do we need to wait for a response from the server if the same task can be done quite well on the edge? The mGL library for GPU-powered visualization on the web, or ImpactIN for iOS using the Apple Pencil, reflect this principle. We have real-time rendering; we need human-sense-time interaction, and the bloated abstractions of the current software stack do not help here, despite the advances in hardware (a nice write-up about the latency problem is here). As a side note, there are types of computers with very low latency: check any synthesizer or digital instrument, where latency from user interaction must be very low; hence the left picture on that slide represents them (a combination of a MIDI pad and a guitar).
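The “under 10 ms” idea above can be sketched as a simple latency-budget check. The stage names and timings below are made up for illustration; they are not measurements from any of the apps mentioned:

```javascript
// Sum per-stage latencies in an input-to-display pipeline and check
// them against a target budget (e.g. ~10 ms for an "instrument" feel).
function totalLatency(stages) {
  return stages.reduce((sum, stage) => sum + stage.ms, 0);
}

function meetsBudget(stages, budgetMs) {
  return totalLatency(stages) <= budgetMs;
}

// Illustrative (made-up) timings for a GPU-backed web visualization:
const pipeline = [
  { name: "input event", ms: 4 },
  { name: "JS handler",  ms: 2 },
  { name: "GPU draw",    ms: 3 },
];

console.log(totalLatency(pipeline));    // 9
console.log(meetsBudget(pipeline, 10)); // true
```

The point of budgeting per stage is that a single slow hop (typically a server round trip) blows the whole budget, which is exactly why the work should stay on the edge.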

Here is a short video from the Korg Monologue synth, using technology known since the 70’s; I consider this type of low-latency feedback loop, applied to new domains, a fascinating subject to explore. Notice the real-time filter modification.

slides 9, 10: a nice chart from 2012 (source: ) on the cyclic nature of server vs. client processing. I stated there that innovation happens on the client (on the edge), since servers (clouds, mainframes) can always do anything and everything. Exaggerated, and related to slide 7 described above. Workstations, PCs, smartphones (the first iPhone), AR/VR devices, wearables in general, etc.: it is always about efficiency in the space used. Interestingly, NVIDIA GPU Gems states something similar at the chip level.

slide 11: a chart of GPUs outperforming CPUs, in conjunction with video resolution.

slide 12: the trickiest slide, ironically called “Find 10 differences”. On the left side is a program I wrote in 1993, in DOS; on the right, one I wrote using WebGL in 2016. Both examples are great achievements. The right one does GPU-based filtering (or, in marketing speak, “in-memory” filtering) with low user latency, so it redraws immediately as the user filters by pointing the mouse at a brush selector. The left one was created in the DOS era, when each graphics card had its own way of mode switching, and that app could squeeze the maximum out of the graphics card at 640×480 resolution with 256 colors! That was something at the time. However, something is wrong with trying to find 10 differences, as the two are basically so similar: both use a monitor, keyboard/mouse, and the same layout….
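The brushing described above can be sketched on the CPU; the WebGL version evaluates the same range predicate per point in a shader (the GLSL line in the comment), but the function names and data here are mine, for illustration only:

```javascript
// CPU sketch of brush filtering: keep the points whose value falls
// inside the brushed [min, max] range, re-run on every mouse move.
// A GPU version does the same test per vertex in GLSL, e.g.:
//   float visible = step(uMin, v) * step(v, uMax);
function brushFilter(points, min, max) {
  return points.filter(p => p.value >= min && p.value <= max);
}

const points = [
  { id: 1, value: 10 },
  { id: 2, value: 25 },
  { id: 3, value: 40 },
];

// The brush selects the 20–30 range:
console.log(brushFilter(points, 20, 30).map(p => p.id)); // [ 2 ]
```

The CPU version has to rescan the array on every mouse move; moving the predicate into a shader is what makes the immediate redraw cheap even for large point sets.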

slide 13: the last slide, titled “Find 1 difference”, is the answer to the dilemma from slide 12: the AR experience, a new way of interaction, a new type of device for new workflows, visual analytics, exploration, etc. As one example of the many possibilities of AR, here is a nice video from HxGN Live 2017:


Why you should visit cemeteries

I really like this post from Avdi Grimm called “The Passion Gospel“. It reminds me of the first chapter of the book The Art of Thinking Clearly by Rolf Dobelli:

..”no journalist is interested in failures – with exception of fallen superstars. This makes the cemetery invisible to outsiders. In daily life, because triumph is made more visible than failure, you systematically overestimate your chances of succeeding.”

..”The media is not interested in digging around in the graveyards of the unsuccessful. Nor is this its job.”

I think the same bias applies to blogs, Twitter, Facebook, and the press releases of companies, or to marketing in general. And to this blog too :)

…you should recognize that survivorship bias is at work, distorting the probability of success like cut glass..

..Survivorship bias means this: people systematically overestimate their chances of success. Guard against it by frequently visiting the graves of once-promising projects, investments and careers. It is a sad walk, but one that should clear your mind.”



Czech oblique images

I have put together a simple web app that overlays cadastral maps with orthophoto. If you zoom in further, you will see high-quality bird’s-eye images that you can rotate (arrows in the upper-left corner). It is not yet possible to overlay the bird’s-eye view with the cadastral WMS overlay; however, I was amazed by the quality of the images, covering the whole Czech Republic and in better quality than Google (the provider, called Seznam, is a big local Google rival). Their rendering is not great either, as it is proprietary and not of the quality of OpenLayers or Leaflet.

Anyway, the information coming from oblique images, or let’s say ‘predefined camera positions’, is very useful for understanding the spatial situation in the respective area. Maybe the fact that the user doesn’t need to move fully in 3D space, but is somewhat constrained, is better than full 3D ‘free-form’, ‘game-like’ navigation, where one can easily get lost. I also found it equal or even better than the information provided by street-view data (what Google provides), as street view is ‘too close’, and I have quite often found myself lost in the ‘street view’.
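Overlaying two tile layers (orthophoto plus a cadastral WMS) works because both are served on the same Web Mercator tile grid. Here is a minimal sketch of the standard lat/lon-to-tile-index conversion, not code from the app itself:

```javascript
// Standard Web Mercator ("slippy map") tile indices for a lat/lon at zoom z.
function lonToTileX(lon, z) {
  return Math.floor(((lon + 180) / 360) * Math.pow(2, z));
}

function latToTileY(lat, z) {
  const rad = (lat * Math.PI) / 180;
  const n = 1 - Math.log(Math.tan(rad) + 1 / Math.cos(rad)) / Math.PI;
  return Math.floor((n / 2) * Math.pow(2, z));
}

// The equator / prime-meridian point lands in tile (1, 1) at zoom 1:
console.log(lonToTileX(0, 1), latToTileY(0, 1)); // 1 1

// Roughly central Prague at zoom 10 (illustrative coordinates):
console.log(lonToTileX(14.421, 10), latToTileY(50.088, 10));
```

In Leaflet or OpenLayers this arithmetic is hidden inside the tile-layer classes; it is shown here only to illustrate why tiles from two independent providers line up when stacked.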

The page is available here (click on the image):

Screen Shot 2014-01-13 at 10.42.43

If you zoom in, you will see this:

Screen Shot 2014-01-13 at 10.40.07

Code available here:

Demo here:


VisualStudio 2013 WebEditor enhancements

Why should hybrid mobile app developers start with a web app first? Because it is more efficient with the tooling from Visual Studio than hacking HTML/JS code guerrilla-style in native mobile IDEs (Xcode or ADT/Eclipse).

Page inspector:

Web Editor Features:

HTML5 features:

Web Editor JavaScript features:


iOS7 notes

iOS7 transition guide (UI):

iOS7 stdlib linking problem: found a linking problem (only) on iOS7; the solution is to add libstdc++ manually into the build phase – link, as described here:

Fix UI controller layout for iOS6 and iOS7:

// -- hide status bar on iOS7 - add to the view controller:
- (BOOL)prefersStatusBarHidden {
    return YES;
}

// -- not iOS7 related but useful: create a cell view either from a nib or manually:
- (CategoryCell *)createCell {
    static NSString *reuseId = @"CategoryCellIdentifier";
    CategoryCell *cell = [self.tableView dequeueReusableCellWithIdentifier:reuseId];
    if (cell == nil) {
        // load from a nib (nib name left empty in the original):
        cell = [[UINib nibWithNibName:@"" bundle:nil] instantiateWithOwner:nil options:nil][0];
        // ...or create the cell manually instead:
        // cell = [[CategoryCell alloc] initWithStyle:UITableViewCellStyleSubtitle reuseIdentifier:reuseId];
    }
    return cell;
}

Notes on native vs. HTML apps

This is like the fox and the stork story: deep or shallow, native or HTML. (By the way, the picture here is by the Czech painter Josef Lada.) As I am more of a ‘stork’ than a fox, I have collected a few resources for other ‘storks’ out there.

My understanding is that mobile platforms are an ‘extension of the human senses’, and this trend continues with wearable computing devices: smaller, tighter on resources. It doesn’t matter what software you write, whether enterprise or consumer; it is an extension of us: our finger touch, our eye view, in near real time.

Stork links: