Discussion in 'User Submitted News' started by shakirmoledina, Mar 11, 2012.
Really amazing video demonstration. Skip to around 0:52.
P.S. - you can test it with your iPhone or iPad.
Guess what's gonna be on the next Xbox.
Looking cool. Not sure if it has a real impact on gaming per se, since the lag can be compensated for in code (to the point where you don't notice much difference). In the drawing part I can really see the advantage. So basically this is a new screen type? Wonder how long it'll take to become reasonably priced.
I tested this with a 3DS and letterbox, and the latency feels to be somewhere near that.
Hey, what lag?
EDIT: Just watched the video - man, that was cool. Never thought the iPhone 4 lagged.
Coming soon: Windows 8 touch-certified monitors.
Yeah, Mr. Microsoft Guy, but you're forgetting that even with decreased latency, the line will be drawn only as fast as the display can refresh itself. Today's mobile devices run at either 30 or 60 frames per second - even if you have a latency of 1ms (the position of the touchpoint is calculated 1000 times a second), you're still going to refresh the screen just 60 times a second, creating another kind of lag: the CPU updating the position of the touchpoint faster than the display is capable of showing it.
Then probably the 'way' they discovered involves a tablet with higher refresh rates? Just speculation.
That is correct, but to achieve this kind of precision one would have to refresh at a rate of 1000 FPS, which is fine and dandy when all you display is a white square, but not so much when you have to render a 3D scene, etc.
1000 FPS? Sheesh, that's just overkill. And to think I hadn't even realised there was any lag in the iPhone 4's screen.
It's not just overkill - it's pointless. The human eye cannot perceive anything over approx. 72 FPS, so why refresh faster than that?
I can see what they're doing - they're trying to register more waypoints as the touchpoint moves and display the whole collection at the next possible frame, creating a better, more precise approximation of the route your finger traveled. That can indeed be done, but I would like to see it working in an actual rendered environment rather than just as a white box floating around - then we'll see if their new method, whatever it is, is usable in today's systems.
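The waypoint idea above can be sketched in a few lines. This is a hypothetical illustration, not the actual method from the video - the 1000 Hz sampling rate, 60 Hz display rate, and `frame_paths` helper are all made up for the example:

```python
# Sketch of the waypoint idea: sample the touch position at a high rate,
# then at each display frame draw the whole path collected since the
# last frame, instead of a single point.

SAMPLE_HZ = 1000   # hypothetical touch-sampling rate
FRAME_HZ = 60      # hypothetical display refresh rate

def frame_paths(samples):
    """Group high-rate touch samples into per-frame path segments."""
    per_frame = SAMPLE_HZ // FRAME_HZ  # ~16 samples arrive per displayed frame
    return [samples[i:i + per_frame] for i in range(0, len(samples), per_frame)]

# A finger moving 1 unit per millisecond along one axis, sampled at 1 kHz:
samples = [(t, 0) for t in range(100)]  # 100 ms of movement
segments = frame_paths(samples)
print(len(segments), len(segments[0]))  # → 7 16
```

Each 60 Hz frame then draws a polyline through ~16 waypoints rather than one point, so the rendered stroke follows the finger's actual route much more closely even though the display itself never refreshes any faster.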
1. The human eye is not limited to arbitrary numbers like "72 FPS". Perception is dynamic, differs from person to person, and our eyes compensate in different ways depending on what we are looking at. LINK
2. Just because the display can't refresh faster than 60Hz doesn't mean that there won't be less lag. The reason we see lag is because of INPUT lag, not because of refresh lag.
In this test, for demonstration purposes, a finger is moving across a single axis at a rate of 1 unit/millisecond. The lag, therefore, would be 1 unit offset per millisecond of lag, per output update. This is observable in figure A. Using a 50ms lag display, there is a 50 unit offset on what is happening and what is actually displayed. On a 1ms lag display, there is a 1 unit offset on what is happening and what is displayed.
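The arithmetic in that test is simple enough to write down directly. A tiny sketch (the function name and rates are mine, just restating the numbers above):

```python
# The finger moves at a constant speed; the display shows input that is
# `lag_ms` milliseconds old, so the drawn point trails by that much distance.
def displayed_offset(lag_ms, units_per_ms=1):
    """Offset (in units) between the finger and the drawn point."""
    return lag_ms * units_per_ms

print(displayed_offset(50))  # 50 unit offset on a 50 ms lag display
print(displayed_offset(1))   # 1 unit offset on a 1 ms lag display
```

The offset depends only on the input lag, not on the refresh rate - which is exactly the point of Figure A.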
Figure A: I destroy your argument with math.
As you can see, this technique of removing input lag is mostly unaffected by display refresh rate. The output has dramatically improved when using even the same display! The results of output would be much smoother with a greater refresh rate, of course, but it does not have to be increased to see the fruits of this technological advance.
Very nicely put. I was in the process of replying to Foxi4 saying something along these lines but you've explained it much better (thus I deleted what I just wrote).
That's a very nice technology they've been developing, now people (artists) will be able to draw properly on a tablet without all that lag.
72 is an approximation. It's obvious that people perceive the world differently - they have *different sets of eyes*. It also depends on the circumstances, level of tiredness, and so on and so forth.
Figure B: Reading is hard.
The way you argued your point created the impression that accuracy would not drastically improve in the same display with this new setup, as portrayed in the following quote:
The FPS of the program does not need to be raised! It doesn't matter whether it's a white box or a graphic-heavy 3D game, the lag would be massively reduced and the display greatly improved. This is because you only need to actually grab the input data when using it in a frame.
Because of the way the technology is used and implemented, it is unnecessary to see it in a rendered environment to verify if it is actually useful - it will be an improvement (in responsiveness and accuracy) no matter what!
I suppose you are correct in that regard; I didn't think of it that way. I was just wondering whether it would be wasteful if the latency is way, way faster than what the system is actually rendering, and how it would impact the program in a real-life situation - that's why I would like to see it implemented in a familiar system, to see the difference somewhat first-hand. After all, after implementing it you would be reading a whole lot more values that most of the time (when not drawing or working in applications that require precision pointing) aren't even necessary.
Obviously it's better if the touchscreen responds faster.
EDIT: It's hard for me to put in words what I have in mind so I'll elaborate.
When drawing you require numerous waypoints that will be connected, so in this case I can see how it will improve responsiveness. In moving an object though, the object will be moved only 60 or so times (for the human eye, of course - it will be moving constantly in the background, it's just not displayed), once each cycle, to the corresponding position. 940 registered positions are eliminated as useless unless you program a degree of an offset yourself. There, that sounds better.
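That split between drawing and dragging can be sketched numerically. This is just an illustration of the counting argument above (the 1 kHz and 60 Hz rates are the hypothetical ones from earlier in the thread):

```python
# With a 1 kHz touchscreen and a 60 Hz display, dragging an object only
# needs the newest sample each frame; the intermediate samples are only
# useful for drawing, where the whole path matters.

samples = list(range(1000))        # one second of 1 kHz positions (made up)
per_frame = len(samples) // 60     # ~16 samples arrive per displayed frame

# Dragging: keep only the last sample of each frame's batch.
drag_positions = [samples[min((i + 1) * per_frame, len(samples)) - 1]
                  for i in range(60)]
discarded = len(samples) - len(drag_positions)
print(discarded)  # → 940 samples never shown, matching the estimate above
```

So the "940 eliminated positions" aren't a cost of the display - they are simply extra data the application is free to ignore or exploit.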
I agree; there would be much superfluous information, but that's just a result of how the technology works. I don't see this as a bad thing, per se, but instead as a window into new opportunities. Because there's more information involved anyway, should frame rates in programs and displays increase, there will still be accurate input data to rely on.
Yeah, you're right. I was just focusing on lower-spec systems, such as handheld consoles, on which a whole lot of unused variables would not necessarily be welcome. I mean, 1000 refreshes a second introduces 2000 integers just for touchscreen input, and it is unlikely that a developer would use all of them. This is obviously a big advancement, it's just not for every platform, per se.
A simple answer to this would be introducing a way to ask the touchscreen for input data manually depending on the circumstances, and to put it to sleep when input is not needed. Then a latency of 1ms would be a godsend: input data exactly when you need it, stored in a handful of variables (the amount depends on the offset - it could be as few as two if you don't want any offset at all).
Hmm. I don't really know how the input protocols work, so I'm going to take some information from this page and hope that it applies to the situation. Depending on how things actually work, I imagine it would be able to compensate for this. For device-initiated data, samples would probably be sent in packets continuously, and the host would decide what to use (e.g. the most recent coordinates during a frame). For host-initiated requests, the device would probably just send over the most recent data when asked (e.g. once per frame). Either way, only the data that is needed would be used, and the rest would either be discarded or wouldn't be sent in the first place.
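The two schemes described above can be mocked up together. Everything here is hypothetical - the `TouchDevice` class, buffer size, and rates are invented to show the idea of a device streaming samples while the host only consumes the newest one per frame:

```python
# Toy model: the device pushes coordinates continuously (device-initiated),
# and the host polls for the most recent one each frame (host-initiated).
from collections import deque

class TouchDevice:
    """Buffers a continuous coordinate stream; old packets are discarded."""
    def __init__(self):
        self.buffer = deque(maxlen=64)  # bounded buffer: stale data falls off

    def push(self, coord):
        """Device-initiated: a new sample arrives on the stream."""
        self.buffer.append(coord)

    def latest(self):
        """Host-initiated: poll once per frame for the newest sample."""
        return self.buffer[-1] if self.buffer else None

dev = TouchDevice()
for t in range(1000):        # one second of a 1 kHz stream
    dev.push((t, 0))
print(dev.latest())          # → (999, 0): the host only uses the newest sample
```

The bounded buffer is the key design choice: unneeded samples cost almost nothing because they silently age out rather than piling up.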
Again, I don't know how this all works, so I can only imagine it works in one of those two ways.
Yeah, I'm guessing it just refreshes two values and you save whichever ones you want using some sort of an interval... unless Microsoft prepares a stupid driver for it that registers way too much info trying to suit everyone's needs.