
Project "Vista Tablet": Part 1 (Background)

I have a Fujitsu LifeBook T3010 Tablet PC (a convertible) based on the Intel Centrino platform, with a Pentium M (1.4 GHz) and Intel Extreme Graphics 2, running Windows XP Tablet PC Edition 2005, and I have been very impressed with it.

For those who may not know, a Tablet PC is a laptop computer (with or without a keyboard) that can accept “digital ink” via a stylus and a special display. One can use the stylus in several ways:

  • as a pointer (similar to the way you’d use a mouse),
  • as a pen with which to capture free-hand drawings,
  • as a pen with which to utilize handwriting (or character) recognition,
  • or (my favorite) as a pen with which to capture “digital ink.”

Digital ink is similar in usability to the way one interacts with pen and paper. An “ink-enabled” application (MS Word, MS Journal, MS OneNote, MS InfoPath, MSN Messenger, Agilix GoBinder, etc.) is one that allows the user to input “ink,” which appears in the user’s native handwriting. That’s where things get really exciting.

Once the digital ink is in the application, any number of things can happen. The application can:

  • simply store the digital ink as a “set of scribbles” with no more meta-data or “meaning” than a set of semi-random scratches on a pad of paper,
  • treat the digital ink as a set of “organized scribbles” which it recognizes as a set of basic symbols (squares, rectangles, circles, triangles, ovals, lines, etc.),
  • treat the digital ink as a set of “organized scribbles” which it recognizes as a set of specific symbols (letters, numbers, characters), which it then parses into strings of words, phrases, and sentences.

It’s this latter use of captured digital ink wherein the true potential — and paradigm shift — can occur.
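
To make those three treatments concrete, here is a minimal sketch in Python (hypothetical names throughout; this is not the actual Tablet PC SDK) of an ink object that always keeps the raw strokes and optionally layers recognized meta-data on top of them:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class InkObject:
    """Captured digital ink plus optional layers of recognition meta-data."""
    # The raw 'set of scribbles': one list of (x, y) points per pen stroke.
    strokes: List[List[Tuple[int, int]]]
    # Basic-symbol recognition, e.g. ["rectangle", "line"]; None if never run.
    shapes: Optional[List[str]] = None
    # Specific-symbol recognition: one candidate list per written word;
    # None if the ink has not been through handwriting recognition.
    word_candidates: Optional[List[List[str]]] = None
```

The key design point is that recognition never replaces the strokes; it only annotates them.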

Imagine writing a letter in your own handwriting. The computer parses your “set of scribbles” into arrays of possible words, phrases, and sentences (any one of which, or any combination of which, may accurately represent your intended data). Here, most people would think the next logical step is to use fuzzy logic to translate the digital ink into plain text.

Why not leave the ink alone instead of “changing” it into plain text? But you can’t search through scribbles other than by eye, right? Right… so instead of having the digital ink transformed immediately (or after a short delay; read: handwriting recognition), have the system store meta-data about what the digital ink could be behind the digital ink objects. For example, digital ink that was written intending to be the word “Hello” could have supporting meta-data of:

  • Hello
  • hello
  • Hell
  • hell
  • jello
  • Helio

Add to that digital ink that was written intending to be the word “world,” which could have supporting meta-data of:

  • World
  • world
  • wool
  • would
  • whirl

Of course the meta-data is contingent upon your handwritten digital ink, and results may vary. That said, you now have the digital ink (your original, handwritten data) and can search on it as well (including on the phrase “Hello world,” as in the case above).
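
As a rough sketch of how such a search could work (again hypothetical Python, not any real ink API), each ink word keeps its original strokes plus the recognizer’s candidate list, and a phrase search matches the query against consecutive candidate sets instead of against the ink itself:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class InkWord:
    """One handwritten word: the original ink plus recognition candidates."""
    strokes: List[List[Tuple[int, int]]]  # the ink itself, never discarded
    candidates: List[str]                 # e.g. ["Hello", "hello", "Hell", ...]

def phrase_matches(document: List[InkWord], query: str) -> bool:
    """True if every word of the query matches a candidate (case-insensitively)
    of a run of consecutive InkWords in the document."""
    words = query.lower().split()
    for start in range(len(document) - len(words) + 1):
        if all(words[i] in (c.lower() for c in document[start + i].candidates)
               for i in range(len(words))):
            return True
    return False

# The "Hello"/"world" example from above (empty stroke lists for brevity):
doc = [
    InkWord(strokes=[], candidates=["Hello", "hello", "Hell", "hell", "jello", "Helio"]),
    InkWord(strokes=[], candidates=["World", "world", "wool", "would", "whirl"]),
]
print(phrase_matches(doc, "Hello world"))    # True -- found via the meta-data
print(phrase_matches(doc, "goodbye world"))  # False
```

The ink remains the document’s native format; the candidate lists exist purely so that tools like search can “see into” it.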

The paradigm shift, then, is to recognize and store meta-data from the original ink while maintaining the original ink… enabling your application (or OS) to use the digital ink as the native format rather than plain text “recognized” from the ink.
