Wednesday, October 13, 2010

Selection of audio file with component identification

In the current version of the mGIS interface, the audio file to play for a layer of components is hard-coded in the program. After the last discussion with our Geography partners, it seems that all of the students should have an identical view of a single map's data, so individualization of the mGIS interface may not be an issue. But if we want educators to be able to select audio files for different layers, we should provide some facility for doing so.

Using a style sheet may be an option. I think of it as a table-lookup process. Here is how we could incorporate a style sheet:

We already have a mouse motion event listener registered with the map components. Whenever the mouse pointer moves over a pixel, this listener triggers the event and calls the event-performed method.

eventPerformed(MouseEvent event) {

    String layerName = identifyMapLayer(event.getX(), event.getY());
    String audioFileName = styleSheet.lookup(layerName);
    // ... play audioFileName for the layer under the cursor ...

}

Here identifyMapLayer returns the layer name, and the style sheet then looks up the audio file name for the corresponding layer. We already know how to define the identifyMapLayer method; now we need to work out how to define a style sheet.

This way the style sheet has no dependency on ArcGIS components and events. From a modular point of view, the style sheet can be built independently, without much integration cost. I hope this will let the educators using the mGIS interface select or change the audio files for different layers.
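As a rough sketch of the table-lookup idea, the style sheet could be as simple as a layer-name to audio-file table. The class and method names below are hypothetical, not part of mGIS or ArcGIS; a real version would populate the table by parsing a style sheet file chosen by the educator.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical minimal style sheet: a layer -> audio-file lookup table.
class LayerStyleSheet {
    private final Map<String, String> audioForLayer = new HashMap<>();

    // Educators would populate this from a style sheet file, not code.
    void setAudio(String layerName, String audioFileName) {
        audioForLayer.put(layerName, audioFileName);
    }

    // Returns the configured audio file, or null if the layer is unstyled.
    String lookup(String layerName) {
        return audioForLayer.get(layerName);
    }
}
```

The event handler then needs to know nothing about where the mapping came from; swapping style sheets changes the sounds without touching the event code.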

Sunday, October 10, 2010

Style sheet API thoughts

    I'm looking through Keith Albin's style sheet project (part of his B.S. thesis project) and starting to think about how to turn it into a useful library for the current project and future projects.

    In the version Keith wrote, the interface to a soundscape map is through the map data structure.  That structure, called a "geoshape", was conceptually similar to a shape in an ArcGIS shapefile (because ultimately that's where the data came from, though it went through a couple of transformations on its way into the program), but it was a new Java data structure.  The style sheet processor worked as follows:
  •  Parse the style sheet(s).  There could be more than one because it provided "cascading" like CSS, but  somewhat more powerful (e.g., it provided variables with some simple string manipulation, to make style sheets much more compact and modular).
  •  For each geoshape:
      •  look up the attributes for the geoshape by class and identifier
      •  attach those attributes to the geoshape data structure

   I'm pretty sure that's not the interface we want, because we don't want the GIS software to have to put all the map data into a custom data structure.  I think it will be pretty straightforward to instead allow the GIS software to look up attributes of a shape as needed.   So now I'm thinking about what that API should be.  I'd like to keep it as independent of the particular GIS data structures as possible, but still convenient to use.

   My initial thought is that it might work something like this:
  • Instantiate a style sheet object, giving it a name or search path for a style sheet specification file.  (I.e., the constructor  of a style sheet object will require information on where to find a style sheet).
  •  Further interactions will take place from within event listeners, or code called by event listeners.  If we were drawing the map ourselves, they might also take place from methods that paint the map on the screen.  A typical call might be  something like


  AttributeDescriptor attrs = styles.getAttribute(

            eventKind,      // Should this be a string?
            shapeLayerName, // Ditto ... string?  Probably fast enough
            shapeID         // String?
  );


     The eventKind parameter might be redundant if the AttributeDescriptor is itself a table for looking up particular attributes, e.g., a simple string -> string table like this:

{ (color: #44c819), (on-entry-sound:  mp3#carsCrashing.mp3),
  (on-exit-sound: midi(some-textual-rep-of-midi-spec)) }


My current thinking, though, is that there isn't much advantage in retrieving all the attributes for a particular object, and then picking out the particular attributes of interest.  It seems that in almost all cases (except for painting the  map on the screen) we need only one attribute from an object, such as "what sound do I play when the cursor  enters the object called 'Yellowstone Lake' in the 'Bodies of Water' layer?"

So what is the type of the thing returned by the style sheet object in answer to that question?  Consider that some types    make sense (e.g., mp3 file and midi spec might make sense for a sound) and others don't (hexadecimal codes for  colors don't make sense for a sound).  A simple answer would be to always return a string, and let the mGIS software be responsible for making sense of the string.  However, besides making the mGIS software a bit more complex, I think there might be some performance issues with that.  For example, if the mGIS software doesn't learn the name of an   mp3 file to play until the cursor reaches an object requiring that sound clip, it cannot preload the mp3 for immediate playback.  This makes me think that there should probably be a somewhat more complex instantiation of the  style-sheet object with some "hooks" to allow much more specific attribute objects to be returned when needed.

Although I haven't thought it through completely, I'm thinking of something like a framework, where the stylesheet software (or a layer just above the core stylesheet software) allows plugging in objects with custom methods for different kinds of interaction.  Think of them as being like listeners --- the mGIS software would provide a set of listener objects when instantiating the style sheet package, and the style sheet package would invoke those listeners when the mGIS software reported an event on a particular object.   So instead of styles.getAttribute above, it might be something like

styles.reactToEvent( eventKind, shapeLayerName, shapeID )

which would find the correct listener in the style sheet internal structure and call it.
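A rough sketch of this listener idea follows. All the names here (StyleListener, register, reactToEvent) are placeholders for discussion, not a committed API; a real version would also support per-shape rules and the cascading behavior from Keith's design.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical listener plugged in by the mGIS software.
interface StyleListener {
    void handle(String eventKind, String layerName, String shapeID);
}

// Sketch of the style sheet package's dispatch side.
class StyleSheet {
    // Keyed by "eventKind/layerName"; a real style sheet would resolve
    // cascaded and per-shape rules instead of a flat table.
    private final Map<String, StyleListener> listeners = new HashMap<>();

    void register(String eventKind, String layerName, StyleListener l) {
        listeners.put(eventKind + "/" + layerName, l);
    }

    // Called by the mGIS event handlers; finds and invokes the listener.
    boolean reactToEvent(String eventKind, String layerName, String shapeID) {
        StyleListener l = listeners.get(eventKind + "/" + layerName);
        if (l == null) return false; // no style rule for this event
        l.handle(eventKind, layerName, shapeID);
        return true;
    }
}
```

Because the listener for "on-entry-sound" can be handed its mp3 file name at registration time, it can preload the clip once, which addresses the performance worry above.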

Does this make sense? Thoughts on how to make it better or easier to use?

Tuesday, August 31, 2010

Introducing ArcGIS Menu-bar

So far, the commands on the mGIS control panel were not accessible to a blind user. Only the standard Java buttons were made accessible, by adding mouse motion listeners to them. Aside from those buttons, there were a couple of ArcGIS built-in commands that could not be customized to add a mouse listener.

I was searching the web and found an article that says: "JAWS is the only popular screen reader that works with Java applications....Allow users to press the “F10” key to move focus to the Java application window menu bar (“File,” “Edit,” “View”) at the top of the application window. When a menu bar menu is open, allow users to press the right arrow and left arrow keys to move between and open adjacent menu bar menus."

http://www.lawrence-najjar.com/papers/Accessible_Java_application_user_interface_design_guidelines.html

That makes me more interested in JAWS and the menu bars of ArcGIS. Now I am implementing a simple program that uses the ArcGIS menu bar for ArcGIS built-in commands alongside a standard Java MenuBar. If this program works well, I will integrate it with the current version of mGIS.

Fortunately, JAWS can now enunciate the names of ArcGIS commands whenever the mouse hovers over the menu items. I am also playing a bit with the Java Swing BorderLayout to implement corner-based popup menus, as suggested by our Geography partners.

Wednesday, August 25, 2010

Map description with the startup of soundscape feature

To give the user an overview of the map, we would like to introduce a map description feature. It requires a "Read_me.txt" file stored in the same directory as the map (.mxd file). The "Read me" file contains a couple of sentences giving an overview of the map. Whenever the "identify" button is clicked, the program reads out the contents of the "Read me" file before it starts identifying the map's elements. This should give the audience a good introduction to the soundscape feature.

I ran into some issues implementing this feature. If the file has more than one line (which is likely for an overview), the program read out only the last line, because of the time difference between instruction loading and word pronunciation: enunciating a line takes much longer than loading the next one. So I intentionally added a delay when loading the Read me file's contents. After loading each line, the program now waits 5000 ms, which gives the TTS engine enough time to read the line out. There is a trade-off between the delay and the length of a line: if a line cannot be enunciated within the delay, it is cut off so the next one can be read. For the current contents of the Washington and Yellowstone "Read me" files, 5000 ms is a good amount of delay.

For a better solution to reading out multiple lines or a whole paragraph, I looked at a couple of other text-to-speech programs, JAWS and NaturalReader.

Unfortunately JAWS doesn't read out lines like sentences! That is, it reads only up to a newline character and then waits for the listener to press down-arrow or Enter. It does respect punctuation such as commas and full stops, as our current TTS engine does, but JAWS will not read out a whole paragraph at a stretch the way we do now.

NaturalReader is a different text-to-speech commercial software from NaturalSoft.
http://www.naturalreaders.com/?gclid=CK37p-iu1aMCFRNSgwodCHuNvw

It loads the text in a different fashion: the user selects a portion of the paragraph, and then pronunciation starts.

In the current version of mGIS, the program reads a single line from the "Read me" file, enunciates it, and then loads the next line. As an alternative, we might try loading the whole text at once and then starting pronunciation, as NaturalReader does.
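A minimal sketch of the whole-file alternative, assuming a hypothetical ReadMeLoader helper; the actual hand-off to the TTS engine is left out, since the point is only that the engine receives the overview as one continuous text instead of line by line with a fixed delay.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical helper: load the whole "Read me" file as one string and
// pass it to the TTS engine in a single call, instead of per-line with
// a 5000 ms delay between lines.
class ReadMeLoader {
    static String loadWholeText(Path readMeFile) throws IOException {
        // Join lines with spaces so the TTS engine treats the overview
        // as continuous prose rather than stopping at each newline.
        return String.join(" ", Files.readAllLines(readMeFile));
    }
}
```

This removes the delay/line-length trade-off entirely: no line can be cut off, because there are no per-line hand-offs.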

Monday, July 19, 2010

Wacom tablet screen mapping

Since I was using the Wacom tablet without installing the driver, it was working in its default modes. The default mode for the mouse is the same as a normal mouse, and the default mode for the pen is screen mapping. In pen mode, screen mapping sets the screen cursor so that wherever I put the tool (pen or mouse), the cursor jumps to the corresponding point on screen. This is also known as absolute positioning.

This was the problem we encountered while testing mGIS for the first time with Jake. Now I have downloaded and installed the Wacom Intuos3 driver from the Wacom site.

http://www.wacom.com/downloads/drivers.php

Here is the link for Wacom Intuos3 manual:

http://www.mannlib.cornell.edu/files/documents/Wacon_PTZ630_UsersManual.pdf

Now the tablet works really nicely with the pen/mouse in screen mapping mode.

Friday, July 16, 2010

Meeting in Seattle

Our first meeting with Jake Cook in Seattle was a clear milestone for the mGIS project. Jake brought to light a couple of things we hadn't paid attention to before.

1. The tablet pen seems preferable to the mouse, although we could not use the tablet in tablet mode. Jake compared the pen to a blind person's cane.

2. Alerting the user when the mouse goes outside the map panel (e.g., at the edge of the screen, on the menu bar, etc.) was an issue. Using JAWS may help us.

3. Even the basic map of Yellowstone park seemed quite complicated to Jake. There could be several reasons:
a. Jake didn't have an overall idea of the map.
b. The map has a number of buffered rivers (13) with many bends. He felt better when we zoomed in and he found only one or two wide rivers on screen. We need to think about simplifying the rivers; increasing the buffer size may help partially.
c. We didn't set up a task list for Jake; maybe Amy could help us do so. Jake was wondering what he needed to do. He tried to work out the direction of a river, i.e., which way it flows, and for one river he found the correct direction.

4. We also tested the multi-touch tablet. Although this one was not big enough for the screen, it worked well for Jake. He used his right index finger.

Thursday, June 24, 2010

Headsets or not?

In our Monday meeting, we briefly discussed direction and distance of sound cues.  These are tied to the use of headphones, and indirectly to Jake's idea about voice input:

There are different levels of "proximity" information we could provide:
  • Distance only (no direction) - just with loudness.  We could scale loudness based on a realistic function (and we should at least find out what that is), or we could use an artificial function to either increase or decrease the range at which proximate regions become audible.  This is further broken into two sub-cases: 
    • Continuous distance scaling.  If the loudness of an object differs as a continuous function of distance, then one can judge direction by moving the mouse (maybe ... this may be difficult). 
    • Discrete scaling.  The simplest version of this is to have an extra buffer around each object, in which its sound is audible but less loud.  With discrete scaling, mouse motion does not reveal the direction to an object unless the boundary of a surrounding buffer is crossed. 
  • Directional.  The most accurate directional audio requires headphones and computation with an individual head-related transfer function (HRTF), but some left-right directionality can be achieved using a generic model, even with ordinary stereo speakers.  We are better at sensing the direction of high-pitched sounds than lower pitches (which is why a surround-sound system has more tweeters and mid-range drivers than woofers, and why high-end stereo systems often use a single sub-woofer).  High-quality directional audio is a good deal more computationally intensive than simply varying loudness, but directional audio is supported by program libraries (including for Java) because it is used in some games. 
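The continuous and discrete scaling options above could be sketched like this. The inverse-square falloff and the buffer loudness of 0.4 are illustrative assumptions, not design decisions; finding the realistic function is still an open item.

```java
// Sketch of the two loudness-scaling schemes for proximity cues.
class LoudnessScaling {
    // Continuous scaling: loudness falls off smoothly with distance
    // (inverse-square here, clamped to full volume near the object).
    static double continuous(double distance, double refDistance) {
        if (distance <= refDistance) return 1.0;
        double ratio = refDistance / distance;
        return ratio * ratio;
    }

    // Discrete scaling: full volume inside the object, reduced volume
    // inside a surrounding buffer, silence beyond it.
    static double discrete(double distance, double bufferWidth) {
        if (distance <= 0.0) return 1.0;         // inside the object
        if (distance <= bufferWidth) return 0.4; // inside the buffer
        return 0.0;                              // out of earshot
    }
}
```

With the continuous version, sweeping the mouse produces a loudness gradient that hints at direction; with the discrete version, the only directional cue is crossing the buffer boundary, exactly as noted above.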
Amy noted that she has never seen a blind person wearing headphones, and she conjectured that blind people might find them objectionable because they "close off" the wearer. 

The indirect link between this and Jake's idea about audio input is that the most accurate audio input comes from headset microphones.  The built-in microphones in most computers are poor quality and/or have a problem with noise from the computer.  It should be possible to provide a good-quality microphone that is not part of a headset, but it could be a challenge to keep it well-positioned relative to a blind user.

Friday, June 4, 2010

Notes on Tuesday 02.06.2010 meeting

Just a few notes before I forget, from our Tuesday 2 June meeting:

We discussed different kinds of sound signals, which might include
  • entering a region (e.g., the buffer around a river).  For example, this could be a short artificial sound.
  • moving or remaining within a region (e.g., following the river).  For example, this could be some naturalistic or mnemonic sound (like the sound of a river).
  • query or hovering in a region (some kind of additional information, probably as speech)
Jim M. brought up the hierarchy of visual symbolism in graphical maps.  This seems to suggest that multiple sound signals could be active at the same time (foreground, background), something we have not considered thus far.

While the discussion of sound symbology got people thinking, it is an issue that we have not explored systematically yet.

Andrew brought up more complex orienting signals about current location or direction.  For example, it is easy to imagine directional signals that would differentiate, e.g., between moving upstream and downstream on a river.  (But how we would implement this, including recognizing which direction is upstream, is not obvious.) 


Amy suggested that the map lines (rivers) needed to be much simpler even than they are now, just a few basic arcs, to be easier to follow.  Would the standard (Douglas-Peucker) line simplification be suitable for this?  Jake thinks not; probably manual simplification will be necessary.  Splining might help but probably not enough to make a big difference.

The Map Publisher program came up, but I have forgotten why ... was it for line simplification?  For manual modification of a map?

Jake suggested voice input, especially for orientation tasks ("where am I?", "what is this?").  Apparently there is a Google or Garmin application that uses it in a mapping application.  Since we don't want the user to have to move a hand from the mouse or other pointing device, voice input might be a useful input modality.

Orientation:  What sort of "where am I?" information is needed, and in what form?  Jim M. thought that users may be getting more used to longitude and latitude as positional information because of the widespread use of GPS.  Another approach is relative position: "200 yards north of the Erb Memorial Union."  We really don't know what is most useful at this point, and it might be task-dependent.

Amy suggested that the developers explore the map with everything displayed in white (i.e., invisible), to get a more realistic sense of what does and doesn't work in navigation.

Two examples of spatial reasoning / exploration came up in the meeting:
  • Display includes base map of US, cities with population indicated, roads.  Explore roads and cities.  How are they related?  (Bigger cities have denser networks of roads.) 
  • Yellowstone: What river is nearest Old Faithful?  Are there areas of Yellowstone without geysers?

Thoughts:
  • Perhaps we should experiment with radical Douglas-Peucker line simplification, to see what happens when a river is simplified down to just a handful of points, and also experiment with spline versus straight line segment representation (if ArcGIS supports splines). 
  • We can start building a foundation for different kinds of sounds on different kinds of events (entering and leaving a region, moving or pausing within a region, etc)
  • A simple way to explore the "white screen" version of a soundscape map or mGIS is to cover or turn away the screen.
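For the first experiment above, a minimal Douglas-Peucker implementation (the standard textbook formulation, not tied to ArcGIS) would let us watch a river collapse to a handful of points as the tolerance grows:

```java
import java.util.ArrayList;
import java.util.List;

// Standard Douglas-Peucker line simplification; points are {x, y}.
class LineSimplifier {
    static List<double[]> simplify(List<double[]> pts, double tolerance) {
        if (pts.size() < 3) return new ArrayList<>(pts);
        // Find the point farthest from the segment first..last.
        int index = -1;
        double maxDist = 0.0;
        double[] a = pts.get(0), b = pts.get(pts.size() - 1);
        for (int i = 1; i < pts.size() - 1; i++) {
            double d = pointToSegment(pts.get(i), a, b);
            if (d > maxDist) { maxDist = d; index = i; }
        }
        if (maxDist <= tolerance) {
            // Everything is within tolerance: keep only the endpoints.
            List<double[]> out = new ArrayList<>();
            out.add(a);
            out.add(b);
            return out;
        }
        // Otherwise split at the farthest point and recurse on each half.
        List<double[]> left = simplify(pts.subList(0, index + 1), tolerance);
        List<double[]> right = simplify(pts.subList(index, pts.size()), tolerance);
        List<double[]> out = new ArrayList<>(left);
        out.addAll(right.subList(1, right.size())); // drop duplicated split point
        return out;
    }

    // Distance from point p to the segment a-b.
    static double pointToSegment(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double len2 = dx * dx + dy * dy;
        if (len2 == 0.0) return Math.hypot(p[0] - a[0], p[1] - a[1]);
        double t = ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / len2;
        t = Math.max(0.0, Math.min(1.0, t));
        return Math.hypot(p[0] - (a[0] + t * dx), p[1] - (a[1] + t * dy));
    }
}
```

Raising the tolerance is the "radical" version: beyond some threshold a whole meandering reach reduces to its two endpoints, which is the behavior Jake suspects may need manual judgment rather than a single global tolerance.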

Tuesday, April 13, 2010

Introducing Buffer Operation

We still had problems identifying the rivers on the Yellowstone park map, or any polyline, because a polyline does not have enough width to hover the mouse over. It would be hard for a blind user to find a river or to stay on it.

To solve this problem we need to implement buffering: buffered rivers have enough width to hover over. That means we need to buffer a certain area along the river's banks. Here is a reference link with more information about buffering:

http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=Buffer_%28Analysis%29

The sample code I found in the ESRI resources deals only with a selected feature. Here a feature means a geometric shape (line, polyline, polygon, point, etc.). So the user first needs to enable a feature selector, then click on the desired feature on the map, and then click the "do buffer" button. This button opens a new window with a slider to select a buffering distance. After pressing the "OK" button, the selected feature is covered, out to the selected distance, with cross-hatched lines; this is the buffered area.

This kind of buffering could not serve our purpose, because one river consists of many polylines, so buffering a single polyline buffers only a portion of the river, whereas we want to buffer all the rivers at once.

So we needed to write a new buffering method. It iterates through all the layers of the map and searches for polylines. When it finds a polyline layer, it buffers each polyline with a specified distance. The method is invoked by pressing a button; no feature selection is necessary. It continues until all the polylines in the layer are buffered. The time required depends on the number of polylines: "usa.mxd" has 679 polylines in its "usHighways" layer and takes 25 seconds, and "yellowStone.mxd" has 1737 polylines in its "major-rivers" layer and takes 55 seconds.

Here I am assuming that each layer has only one type of geometric feature. This method is much more straightforward and requires less user involvement, although the speed of buffering is still an issue.

Friday, April 2, 2010

Identifying with audio response

Since mouse-move events are far more frequent than audio responses, the audio response should occur only when the mouse crosses a map component boundary. So I changed the identify method so that the audio response fires only when the cursor enters a new map component.

Now the speaker enunciates "Yellowstone Lake" when the cursor first enters the Yellowstone Lake area.
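The boundary-crossing check can be sketched as follows. BoundaryAnnouncer and its method names are hypothetical stand-ins for the change to the identify method, and the actual TTS call is left as a comment:

```java
// Remember the last component under the cursor and fire the audio
// response only when the component changes (a boundary crossing).
class BoundaryAnnouncer {
    private String lastComponent = null;
    private String lastSpoken = null; // recorded here for illustration

    // Called on every mouse-move with the component under the cursor
    // (null when the cursor is over empty map). Returns true if the
    // component name was announced.
    boolean onMouseOver(String componentName) {
        if (componentName == null || componentName.equals(lastComponent)) {
            lastComponent = componentName;
            return false; // same component (or empty map): stay quiet
        }
        lastComponent = componentName;
        lastSpoken = componentName;
        // speak(componentName); // hand off to the TTS engine here
        return true;
    }

    String lastSpoken() { return lastSpoken; }
}
```

This keeps the per-event work down to one string comparison, so the frequent mouse-move events no longer queue up slow audio responses.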

Monday, February 15, 2010

Identify map component with mouse move

Now the identify method can determine the name of a map component on a click. The next task was to add this functionality on mouse move, so I added another event handler, a mouseMotionListener, to the MapControl. Whenever the mouse pointer moves, it invokes the same identify method to determine where the pointer currently is.

Since mouse-move events are more frequent than mouse clicks, the audio lags behind, so for now I have disabled the audio response.

Tuesday, February 9, 2010

Incorporating sound with map component

Opening a new window on each click would not work for mouse-move events, so I changed the way the 'identify' toolbar control shows its information table. Right now we are interested only in determining the place name, so if the mouse is clicked somewhere inside 'Oregon', only the text 'Oregon' is printed to the console panel. Thus the mouse pointer identifies what it is pointing at.

The next step was to connect sound to the mouse event. I used the 'com.sun.speech.freetts' library, a free text-to-speech library for Java. Now if the mouse is clicked somewhere in 'Oregon', the speaker also enunciates the word 'Oregon'!

OnMouseMove and toolbar control 'identify'

The ArcGIS Engine library provides an event handler, OnMouseMove, that acts as a mouse motion listener. Now I can extract the mapX and mapY of the point the mouse has moved to.

The ArcGIS engine divides the JPanel into three beans:
1. Topmost: toolbar
2. Leftmost: Table of Contents (TOC bean)
3. other: Map View.

I found an important toolbar control, 'identify'. If this control is active and I click on any point in the map view, it opens a new window containing a table of information about the map point, as extracted from the .mxd file. The view of this information can be customized by selecting either "All layers" or "Top-most layer".

For example, if the mouse is clicked somewhere inside 'California' on the US map, the table shows the State_name, population, area, households, etc. of CA.

Thursday, January 28, 2010

Working with ArcGIS Engine event handler

The Java Visual Class facilitates a programmer's work, especially with JPanel and Java beans. Now the visualization of the map is more customizable for me: I can add or delete toolbars or maps at any time.

The Java visual editor has a special library, 'ArcGIS Component'; 'MapBean', 'TOCBean', and 'ToolbarBean' are some of its components. Now I can drag and drop these beans into the visual editor and change some of their properties.

But when adding event handlers, I found that ArcObjects cannot deal directly with the ordinary Java libraries; they can access only methods from the Esri ArcGIS Engine library. That's why I cannot use the usual Java event handlers (addActionListener / actionPerformed) with ArcObjects.

Fortunately the ArcGIS Engine library has an event handler named IMapControlEvents. I am now using it for mouse clicks and arrow keys. My MapControl returns the (mapX, mapY) and (screenX, screenY) of the clicked point. The map can also be zoomed in or out with the mouse scroll wheel and the keyboard up/down arrow keys.

Monday, January 25, 2010

First program with ArcGIS Engine developer kit

The three basic steps of programming with the ArcGIS Engine developer kit for JAVA:

1: Initialize the Java Component Object Model (COM).
2: Initialize a valid License. (Need to locate the "Esri license product code" file)
3: Create visual components for the mapviewer.

The program loads a .mxd file into a 'MapBean' (created using a Java bean), which is a class from the ArcObjects library. The 'loadMxFile' method of this class loads the entire dataset and renders a graphical map view in a JFrame.

There are sample data folders in the /java/samples/data/mxds/ directory. I tested world.mxd; it works fine, showing the whole world map.

Thursday, January 21, 2010

Installing ArcGIS Engine plug-in for Eclipse

The SDK version on my computer is 1.6 and my Eclipse version is 'Galileo'.
The SDK is fine for ArcGIS, but there are no instructions for my 'Galileo' Eclipse...

Here is the link for ArcGIS Eclipse plug-in installation:
http://resources.esri.com/help/9.3/ArcGISServer/adf/java/help/doc/6c7a7b84-5168-4843-9536-34e5ef2ec424.htm#About

Here is another one:
http://edndoc.esri.com/arcobjects/9.2/Java/java/engine/ide_integration/eclipse/EclipseInstall.html

For my Eclipse, I needed to find an extra plug-in, Visual Editor, which helped me install the ArcGIS plug-in. I think the ArcGIS plug-in has a dependency on the Eclipse Visual Editor. Here is a link for the Visual Editor plug-in:
http://www.rcp-vision.com/index.php?option=com_content&view=article&id=81%3Aeclipse-visual-editor-di-nuovo-operativo&catid=40%3Atutorialeclipse&Itemid=28&lang=en

At last I could successfully run a few sample ArcGIS Engine programs...

Wednesday, January 20, 2010

Installing ArcGIS Engine developer kit

I got the DVD of the ArcGIS Engine software and the ESRI authorization file from Jake. The installation procedure was not as easy as I thought :(

1. Install the ArcGIS Engine Runtime for Windows
2. Locate the authorization file as asked.
3. Install the ArcGIS Engine SDK for JAVA
4. Install the ArcGIS Help System for JAVA
5. Follow the ArcGIS pre 9.3 GDB direct connect Installation guide for post installation configuration

Searching for onMouseOver event

My search of Resources.esri.com shows that ArcEngine handles some mouse events. Here is the definition of these methods according to the ArcEngine developer tool kit:

http://edndoc.esri.com/arcobjects/9.2/Java/api/arcobjects/com/esri/arcgis/schematic/INgProjectTool.html#mouseMove%28com.esri.arcgis.schematic.INgView,%20int,%20int,%20int,%20int,%20int,%20int,%20double,%20double%29

The following link describes some parameters of a cell (I need to find out what a 'cell' means here). It also describes how to clip parameters with a cell.

http://resources.esri.com/help/9.3/ArcGISEngine/java/Gp_ToolRef/using_geoprocessing_tools/parameter_status_colors_and_messages.htm

I met Jake this morning and got a copy of the ArcGIS Engine software from him (I also got a returnable DVD from McKenzie Hall). Now I need to install it on my computer.

E-mail from Michal

This is partly to record my own memories before they fade, and partly to communicate them ...

ArcEngine is distinct from ArcObjects. ArcObjects has a COM interface; ArcEngine is a framework with Java and .net APIs. With ArcEngine we get a lot of basic functionality for writing a GIS by modifying or augmenting what is already there in the framework. In that way, for example, we can deal with ESRI Shapefiles (the dominant data format in GIS) as well as other data formats.

Resources.esri.com is the best source for documentation in the APIs as well as introductory information.

Unknown: Does ArcEngine produce events we can hook into, as one would use an OnMouseOver handler in Javascript? If it does, then hooking into their existing graphical display and adding handlers to produce sound could be by far the easiest way to build a soundscape interface. That would, for example, take care of projecting map coordinates of a shapefile into display coordinates, with no extra programming.

If it doesn't provide events like OnMouseOver, it might still be possible (but more cumbersome) to use the framework, as long as it can report the current coordinates of the mouse as map coordinates (translating back through the map projection), so that it would be relatively simple to keep our own data structure to determine which shapes were under those coordinates.

(A couple of these things Jake mentioned and I just remembered as I was typing.)

Tuesday, January 19, 2010

Testing blog posting.....