Lately I’ve been developing and designing an electro-optical method of bringing multitouch to LCD panels, as opposed to the traditional IR-camera version of MT. An electro-optical unit has significant benefits compared to traditional IR-camera systems. First, it reduces the required form factor: with a traditional IR camera we are stuck with a large enclosure, whose depth depends on the viewing angle of the camera. An electro-optical device shrinks that form factor significantly.

The electro-optical device I’m referring to is a custom-designed IR sensor matrix that attaches to the back of an LCD and provides high-resolution, high-frame-rate input capture. Essentially, the electro-optical unit eliminates the need for an IR camera, in much the same fashion as Microsoft’s ThinSight. Where my solution differs is that ThinSight employs the DI (diffused illumination) technique, using proximity sensors that contain both an IR emitter and an IR detector, thereby effectively halving the resolution. I aim to employ DSI, or a combination of FTIR and LLP, to eliminate the need for IR emitters on the sensor matrix, thus doubling the number of detectors in the same area.

So basically, below are a few renderings of the setup I’m trying to achieve.

  1. Schell, 12-31-2008

    What is this electro-optical device? Is the technology already available commercially and just needs to be integrated into an array, or is this sensor speculative? What are the physical properties of this sensor? One could take an array of CCD or CMOS chips and accomplish this, at great expense; are you proposing something similar?

    • Taha, 01-01-2009

      This device is not commercially available yet, but it may be in the future; I’m currently in the process of designing it. “Electro-optical” is a term coined by Microsoft in their ThinSight project: essentially, it’s an electronic hardware device with an optical component. A high-density version could theoretically use CCDs, but I don’t plan to use them, mainly because any finger or object placed on the screen will be at least 0.3~0.5 cm across, so density as high as a CCD’s or a CMOS’s is not needed in an array format. And even if you did use them, the computation required for an array larger than 2×2 would be enormous. At the moment I’m testing proximity sensors and independent IR photodetectors, placing them in arrays, effectively reducing the resolution and density. I hope this answers your questions.

  2. Schell, 01-02-2009

    Yes, that answers some and raises others. If you reduce the resolution, wouldn’t you effectively restrict the input’s movement to quantized positions? I would imagine these sensors detect a range of intensity. If so, in order to re-create a smooth dragging action from a finger dragged across the screen, wouldn’t you have to interpolate between two sensors’ locations, weighting each sensor’s intensity to guess the actual point of the finger? I understand that a finger’s touch width is only ~0.3 to 0.5 cm, but that 0.3 cm could land anywhere on the board, and will move in infinitesimally small increments across the screen. The fingers shouldn’t be confined to 0.3 cm² bound areas. Interpolation is the only method I can think of to avoid this scenario.

    So, what about using a 2D array of LEDs? This might help with backlighting if you can use visible-light LEDs to light your LCD as well as track your fingers. In your current drawings I don’t see any backlighting, LED or CCFL. The placement of your backlighting could be tricky: in front of the sensor array, the lights block input; behind the sensor array, the lights are blocked, leaving you with a dark screen. Thin form-factor multitouch sounds like a very sticky situation.

    • Taha, 01-02-2009

      It’s interesting that you brought up input movement being restricted to quantized positions; I was thinking along the same lines when I initially started my research. There are three key things to consider when selecting sensors, in order to eliminate the need to interpolate: FOV, input blob size, and frame rate. The sensors I’ve tested so far have a lens on top, much like an LED, in 10° and 15° versions. As such they have a rounded FOV with a Gaussian falloff, so you are right in assuming that the sensors detect a range of intensity: brightest at the middle and diminishing towards the edge. As a result, if you connect a number of these sensors in an array, you get regions with somewhat diminished sensory depth. A key thing to note is that even with this diminished sensory depth, the entire screen will still be receptive in those areas. On top of this, you have to take into account the minimum input blob size of the human finger. The actual size of a sensor is 3 mm (0.3 cm), and the minimum size of a human finger’s input blob is around 0.8~1 cm, so we can safely assume that when a finger touches the screen it will interact with at least two sensors at all times. That easily gives you an exact location for your input. Finally, frame rate plays a major role in capturing movement: the higher the frame rate, the more robust your system is to motion. Even with a high-resolution capture system, if the frame rate isn’t fast enough, movement will be choppy and quantized. Combining all three of these things, we can create a system that has no need to interpolate between sensors.
      I have also looked into 2D LED arrays, but I stayed away from them because they provide a non-haptic means of detecting input. As for backlighting, in my diagram “LCD” refers to the whole stack: LCD + diffusion layers + backlight. I plan to follow the same procedure many people on the forums have used, removing the soft silver reflector diffusion layer. I know this reduces the brightness somewhat, but at the moment that isn’t my major concern.
      And you are correct in assuming this isn’t really a DIY project like many of the other multitouch solutions available out there; it can get very technical at times, even if you know what you’re doing :P
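The exchange above can be made concrete with a small sketch: if a fingertip (0.8~1 cm) always covers at least two adjacent 3 mm detectors, an intensity-weighted centroid over the activated sensors recovers a position finer than the sensor pitch. This is a hypothetical illustration, not the actual firmware; the pitch, threshold, and sample readings are assumed values.

```python
# Hypothetical sketch: locating one touch on a coarse IR detector grid
# by intensity-weighted centroid. Pitch/threshold/readings are assumptions.

SENSOR_PITCH_MM = 3.0  # assumed spacing between detector centres

def touch_centroid(readings, threshold=0.2):
    """Return the (x, y) touch position in mm, or None if nothing is touching.

    `readings` is a 2D list of normalized IR intensities (0.0 to 1.0),
    one value per sensor; values below `threshold` are treated as noise.
    """
    total = 0.0
    sx = sy = 0.0
    for row, line in enumerate(readings):
        for col, value in enumerate(line):
            if value < threshold:
                continue
            total += value
            sx += value * col * SENSOR_PITCH_MM
            sy += value * row * SENSOR_PITCH_MM
    if total == 0.0:
        return None
    return (sx / total, sy / total)

# A ~9 mm fingertip centred between two sensors lights both of them:
frame = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.6, 0.6, 0.0],
    [0.0, 0.1, 0.1, 0.0],
]
print(touch_centroid(frame))  # x lands halfway between the two lit columns
```

Because the blob always spans two or more sensors, the estimate moves continuously as the finger drags, even though the sensors themselves sit on a quantized grid.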

  3. Jeff, 02-24-2009

    I had this exact same idea a couple of months ago; it’s in a thread on the NUI Group forums somewhere. I think that if you got something manufactured that just had a ton of IR sensors on a big breadboard, and right next to those some sort of backlight for your LCD, you would have a perfect way to get a DIY thin multitouch screen. The other way to do it is with capacitive sensors behind the screen; I have seen a demo, done with a paper clip, where the touch is sensed from about an inch away.

    -Just some stuff to think about.

  4. George Birbilis, 04-14-2010

    Regarding quantized steps, why use a rectangular grid and not a hexagonal (beehive-like) grid?

    • Taha, 04-24-2010

      A hexagonal grid increases the complexity of the system without really increasing its resolution. Another thing is that the sensors themselves are rectangular; a hexagonal structure, although viable, only adds to the complexity. At the end of the day, one finger touch will activate at most 6~9 sensors, so a hex structure doesn’t prove to be advantageous.

  5. hillbilly, 04-26-2010

    ahh, looks a lot like my thin MT setup design on NUIgroup.com…

    I’m so excited to see what comes of this! I hope you continue to pursue the goal of getting a thin MT setup working! I’ll stay tuned! :D