Until now, the commands on the mGIS control panel were not accessible to a blind user. Only the standard Java buttons had been made accessible, by adding a mouse motion listener to each of them. Beyond those standard buttons, however, there are a couple of ArcGIS built-in commands that cannot be customized to add a mouse listener.
While searching the web, I found an article that says: "JAWS is the only popular screen reader that works with Java applications....Allow users to press the “F10” key to move focus to the Java application window menu bar (“File,” “Edit,” “View”) at the top of the application window. When a menu bar menu is open, allow users to press the right arrow and left arrow keys to move between and open adjacent menu bar menus."
http://www.lawrence-najjar.com/papers/Accessible_Java_application_user_interface_design_guidelines.html
That made me more curious about JAWS and the ArcGIS menu bars. I am now implementing a simple program that exposes the ArcGIS built-in commands through an ArcGIS menu bar alongside a standard Java MenuBar. If this program works well, I will integrate it into the current version of mGIS.
Fortunately, JAWS can now enunciate the names of the ArcGIS commands whenever the mouse hovers over the menu items. I am also experimenting with Java Swing's BorderLayout to implement corner-based popup menus, as suggested by our Geography partners.
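The hover announcement for the standard Swing items can be sketched like this. This is a minimal sketch, not the actual mGIS code: the class name and the `speak` callback (which would forward text to the screen reader or TTS engine) are hypothetical.

```java
import java.awt.event.MouseAdapter;
import java.awt.event.MouseEvent;
import java.util.function.Consumer;
import javax.swing.JMenuItem;

// Announces a menu item's label whenever the mouse enters it, so a
// screen reader / TTS engine can enunciate the command name on hover.
public class SpeakOnHover extends MouseAdapter {
    private final JMenuItem item;
    private final Consumer<String> speak; // e.g. forwards text to the TTS engine

    public SpeakOnHover(JMenuItem item, Consumer<String> speak) {
        this.item = item;
        this.speak = speak;
    }

    @Override
    public void mouseEntered(MouseEvent e) {
        speak.accept(item.getText()); // enunciate the command name
    }

    // Convenience: attach the hover announcer to a menu item.
    public static void install(JMenuItem item, Consumer<String> speak) {
        item.addMouseListener(new SpeakOnHover(item, speak));
    }
}
```

The same listener would be installed on every accessible item; the ArcGIS built-in commands are exactly the ones where this cannot be done, which is why the menu-bar route above is worth trying.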
Tuesday, August 31, 2010
Wednesday, August 25, 2010
Map description with the startup of soundscape feature
To give the user an overview of the map, we wanted to introduce a map description feature. This feature requires a "Read_me.txt" file stored in the same directory as the map (.mxd file). The "Read me" file contains a few sentences describing the map at a high level. Whenever the "identify" button is clicked, the program reads out the contents of the "Read me" file before it starts identifying the map's elements. This should be a good starting point for the soundscape feature for our audience.
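Locating the description file next to the map could be sketched as follows; this is a minimal sketch assuming only what is stated above ("Read_me.txt" in the same directory as the .mxd file), and the class and method names are hypothetical.

```java
import java.io.File;

public class MapDescription {
    // Given the path of the loaded map (.mxd), return the companion
    // "Read_me.txt" in the same directory, or null if it does not exist.
    public static File findReadMe(String mxdPath) {
        File dir = new File(mxdPath).getParentFile();
        File readMe = new File(dir, "Read_me.txt");
        return readMe.exists() ? readMe : null;
    }
}
```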
I ran into some issues implementing this feature. If there is more than one line (which is to be expected for an overview), the program reads out only the last line. This is because of the time difference between loading an instruction and pronouncing the words; enunciation obviously takes longer. So I intentionally added a delay when loading the Read me file contents: after loading each line, the program now waits 5000 ms, which gives the engine enough time to read out the line. There is a trade-off here between the delay and the length of the line: if a line cannot be enunciated within the delay, it is cut off so the next line can be read. For the current contents of the Washington and Yellowstone "Read me" files, 5000 ms is a good amount of delay.
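One way to soften that trade-off, instead of a fixed 5000 ms, would be to scale the delay to the line length. A minimal sketch, assuming a typical speaking rate (the 150 words-per-minute figure is an assumption, not measured from our TTS engine, and `speak` is a placeholder for the real engine call):

```java
import java.util.function.Consumer;

public class ReadMeDelay {
    // Estimate how long the TTS engine needs to enunciate a line,
    // given an assumed speaking rate in words per minute.
    public static long estimateDelayMillis(String line, int wordsPerMinute) {
        String trimmed = line.trim();
        int words = trimmed.isEmpty() ? 0 : trimmed.split("\\s+").length;
        return (long) words * 60000L / wordsPerMinute;
    }

    // Speak each line, waiting long enough for it to finish
    // instead of sleeping a fixed 5000 ms per line.
    public static void readLines(Iterable<String> lines, Consumer<String> speak)
            throws InterruptedException {
        for (String line : lines) {
            speak.accept(line);
            Thread.sleep(estimateDelayMillis(line, 150)); // ~150 wpm assumed
        }
    }
}
```

A long line then gets a proportionally longer pause, so it is no longer cut off by the next line.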
Looking for a better way to read out multiple lines or a whole paragraph, I tried a couple of other text-to-speech programs, JAWS and NaturalReader.
Unfortunately, JAWS doesn't read lines out as sentences! It reads only up to a newline character and then waits for the listener to press the down arrow or Enter. It does respect punctuation such as commas and full stops, as does our current TTS engine, but it will not read out a whole paragraph in one stretch the way we are doing now.
NaturalReader is a commercial text-to-speech program from NaturalSoft.
http://www.naturalreaders.com/?gclid=CK37p-iu1aMCFRNSgwodCHuNvw
It loads text in a different fashion: the user selects a portion of the paragraph and then starts the pronunciation.
In the current version of mGIS, the program reads a single line from the "Read me" file, enunciates it, and then loads the next line. As an alternative, we could try loading the whole text at once and then starting the pronunciation, the way NaturalReader does.
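That alternative could be sketched as follows: load the entire "Read me" file into one string and hand it to the engine in a single call, so the engine paces the speech instead of our per-line loop. A sketch only; the class name is hypothetical, and the actual speak call to the TTS engine is left out as a placeholder.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class WholeTextReader {
    // Load the entire "Read me" file as one string, joining lines with
    // spaces so the TTS engine can speak the paragraph in one call.
    public static String loadWholeText(String path) throws IOException {
        StringBuilder text = new StringBuilder();
        try (BufferedReader in = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (text.length() > 0) text.append(' ');
                text.append(line.trim());
            }
        }
        return text.toString();
        // then e.g.: tts.speak(text) -- placeholder for the engine call
    }
}
```

With this approach the 5000 ms per-line delay disappears entirely, since there is only one utterance to pace.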