Update June 2019: an idea on sequencing and replay is mentioned here: https://blog.sumbera.com/2019/06/28/motion-sequence-and-replay-in-dynamic-maps/
****
With some delay, but before the year ends, I have to wrap up my presentation from the GIS Hackathon in Brno (March 2017), called Geo Visual GPU Analytics. It is available here in Czech: https://www.slideshare.net/sumbera/geo-vizualni-gpu-analytika. There are more pictures than text, so here I will try to add some comments to the slides.
slides 3, 4: credits to my sources of inspiration – Bret Victor, Oblivion GFX, Nick Qi Zhu.
slide 5: this is a snippet from my “journey log” (working diary). Every working day I keep a short memo of what I did or of anything significant that happened. It serves several purposes; for example, in this case I had given up on trying WebGL, spent a day or two on another subject, then returned to the problem – and voilà, I could resolve it. Every day counts; it helps to keep discipline and to learn from past entries. Getting to know WebGL really opened “New Horizons” of the GPU computing universe.
slide 7: “better a bird in the hand than a pigeon on the roof” (the English equivalent is “a bird in the hand is worth two in the bush”). This proverb is put into the context of edge vs. cloud computing on slide 9. In the hand – that is the edge; on the roof – that is the cloud. So I believe that what users can hold in their hand, wear, or experience “nearby” is better (or more exciting) than what exists somewhere far away (despite its better parameters).
slide 8: in Czech we have the same word for a tool and an instrument – “nástroj” – so the question is: a musical instrument, or just an instrument (i.e. a tool)? This goes to the whole topic of latency in user interaction, described for instance here. I like to compare the right approach to a musical instrument, where a tight feedback loop forms between the player and the instrument. The instrument must respond in less than 10 ms to close that loop, so that the player can feel the instrument as his own “body”, forget about the “mechanics”, and flow with the expressiveness of what he is interpreting or improvising. (right picture credit here) Why not have such tools in visual analytics? Why do we need to wait for a response from a server if the same task can be done quite well on the edge? The mGL library for GPU-powered visualization on the web, or ImpactIN for iOS using the Apple Pencil, reflect this principle. We have real-time rendering; what we need is human-sense-time interaction, and the bloated abstractions of the current software stack do not help here, despite the advances in hardware – there is a nice write-up about the latency problem here. As a side note, there are types of computers with very low latency – check any synthesizer or digital instrument, where latency from user interaction must be very low; hence the left picture on that slide represents them (a combination of a MIDI pad and a guitar). A minimal sketch of such a feedback loop follows below.
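To make the tight feedback loop concrete, here is a minimal browser sketch (all names are illustrative; this is not code from the talk or from mGL): pointer input is recorded as cheaply as possible, and the redraw happens on the very next animation frame, with no server round trip in between.

```typescript
// Tight feedback loop sketch: record input in the event handler,
// render on the next animation frame, never wait on a server.
const canvas = document.querySelector("canvas") as HTMLCanvasElement;

let pointerX = 0;
let dirty = false;

canvas.addEventListener("pointermove", (e: PointerEvent) => {
  pointerX = e.offsetX; // just record the input; no heavy work here
  dirty = true;         // mark the scene for redraw
});

function frame() {
  if (dirty) {
    render(pointerX);   // hypothetical GPU draw; must stay well under ~16 ms
    dirty = false;
  }
  requestAnimationFrame(frame); // keeps input-to-photon latency at one frame
}
requestAnimationFrame(frame);

function render(x: number) {
  // issue WebGL draw calls here (omitted in this sketch)
}
```

If the render call fits within one ~16 ms frame, the interaction stays in the “musical instrument” regime; the moment it blocks on a network response, the loop is broken.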
Here is a short video from the Korg Monologue synth demonstrating something in use since the ’70s; I consider this type of low-latency feedback loop, applied to new domains, a fascinating subject to explore. Notice the real-time filter modification.
slides 9, 10: a nice chart from 2012 from britesnow.com on the cyclic nature of server vs. client processing. I stated there that innovation happens on the client (on the edge), as servers (clouds, mainframes) can always do anything and everything. Exaggerated, and related to slide 7 described above. Workstations, PCs, smartphones (the first iPhone), AR/VR devices, wearables in general, etc. – it is always about efficiency in the space used. Interestingly, NVIDIA’s GPU Gems states something similar at the chip level.
slide 11: a chart of GPU performance outpacing the CPU, shown in conjunction with growing video resolution.
slide 12: the trickiest slide, ironically called “Find 10 differences”. On the left side is a program I wrote in 1993, in DOS; on the right, one I wrote using WebGL in 2016. Both were great achievements for their time. The right one does GPU-based filtering (“in-memory”, in marketing terms) with low user latency, so it redraws immediately as the user filters by pointing the mouse at a brush selector; a sketch of that technique follows below. The left one was created in the DOS era, when each graphics card had its own way of mode switching, and that app could squeeze the maximum out of the graphics card: 640×480 resolution with 256 colors – that was something at the time! However, something is wrong with trying to find 10 differences, because the two are basically so similar: both use a monitor, a keyboard/mouse, and a similar layout…
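To make “GPU-based filtering” concrete, here is a sketch of the general technique (an assumed approach, not the actual internals of my 2016 app or of mGL): all point data stays in GPU memory, and a shader discards points outside the brushed range, so each mouse move only updates a uniform and triggers a redraw.

```typescript
// WebGL 1 shaders: each point carries the attribute value being brushed;
// the vertex shader compares it against a [min, max] range uniform and
// the fragment shader discards points outside the brush.
const vs = `
attribute vec2 a_pos;     // point position in clip space
attribute float a_value;  // dimension being brushed
uniform vec2 u_range;     // [min, max] from the brush selector
varying float v_keep;
void main() {
  v_keep = (a_value >= u_range.x && a_value <= u_range.y) ? 1.0 : 0.0;
  gl_Position = vec4(a_pos, 0.0, 1.0);
  gl_PointSize = 2.0;
}`;

const fs = `
precision mediump float;
varying float v_keep;
void main() {
  if (v_keep < 0.5) discard;  // filter on the GPU; no data re-upload
  gl_FragColor = vec4(0.2, 0.6, 1.0, 1.0);
}`;

// On each brush move: update the range uniform and redraw all points.
function onBrush(gl: WebGLRenderingContext, program: WebGLProgram,
                 min: number, max: number, pointCount: number) {
  gl.useProgram(program);
  gl.uniform2f(gl.getUniformLocation(program, "u_range"), min, max);
  gl.drawArrays(gl.POINTS, 0, pointCount);
}
```

Because the raw data never leaves GPU memory, the cost per interaction is one uniform update plus one draw call, which is what makes the redraw feel instantaneous.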
slide 13: the last slide, titled “Find 1 difference”, is the answer to the dilemma from slide 12 – the AR experience: a new way of interaction, a new type of device for new workflows, visual analytics, exploration, etc. As one example of the many possibilities of AR, here is a nice video from HxGN LIVE 2017: