
All About Google Chrome

This is my fifth session from the first day at the CSUN conference.  This session covers “…the built-in accessibility features of Chrome, Chrome OS and Chromebooks.”  Description comes from the conference event guide.  I attended Google’s pre-conference seminar in 2013, and it was very informative (my 10-part blog post can be accessed here).  I hope they pack in the juicy details this year too 🙂

Presenters:

  • Dominic Mazzoni, Software Engineer on the Google Team (@)
  • Peter Lundblad, Engineer on the Google Chrome Team (@)
  • David Tseng, Software Engineer on the Google Chrome Team (@)

 

David Tseng showed off a remote control that comes with ChromeVox built-in.  It’s meant for video conferencing.  David used the tool to join a Google Hangout (a kind of video call).  It worked well in the demonstration, at least from the perspective of selecting and joining an existing Hangout.

 

Dominic Mazzoni talked briefly about the importance of the web as the world’s largest open platform.  The Chrome browser was originally introduced with the following three principles/priorities in mind:

  • Speed:  re-introduced competition into the browser market
  • Simplicity:  create a browser that doesn’t distract from the content you’re looking at.  Also, updates happen automatically.
  • Security:  automatic updates resolve security holes as soon as possible

Dominic jumped into ChromeOS and showed some of the accessibility features available, including on-screen keyboard, screen magnifier, large mouse cursor, high contrast mode, sticky keys, tap-dragging, and ChromeVox itself.

 

Peter Lundblad demonstrated ChromeVox, a screen reader made especially for ChromeOS.  Support for voices in multiple languages has been recently added; Peter demonstrated this with both German and British female voices.  Refreshable braille device support has also been added to ChromeOS.  This particular demonstration was interesting to me because I’ve never actually seen one of these devices in action.  There is a “growl-like” on-screen display of the braille output so sighted users can see what the braille device itself is showing.  Peter added a bookmark using the braille device.

 

Dominic then took over and talked about synchronized bookmarks (and other settings) that “follow the user” to whatever device they may be using.  He demonstrated this using an Android phone.  The phone he showed the audience successfully displayed the bookmark that Peter had set on the Chromebook a few minutes before.  Dominic then activated the local search control (a circular control with links to phone functions) by swiping up and to the right.

Dominic then demonstrated the Chromecast, which lets you “cast” content from any Chrome browser to a display the Chromecast is plugged into.  Laura Palmero shared her personal experience using the Chromecast.  Laura has a vision disability that makes it difficult for her to view things in the center of her field of view, so she relies on high-contrast displays that are close to her (like her phone).  The Chromecast has made it much easier for her to interact with her large-screen television at home…she now controls it using her phone, which she uses all the time.

 

Question:  what about the accessibility of Google Docs?  Answer:  there is a session tomorrow (Thursday) that goes into great detail about Google Docs.

Question:  what is the strategy with the Chromebook?  It seems like just an interesting toy.  Answer:  it’s not a general-purpose computing device that’s meant to replace all computers.  It’s a device that’s made to work with the web.

Question:  what tools are you providing so developers can have access to things like view source, that sort of thing?  Answer:  we know we have some work to do with this, but there are workarounds.  Please speak with us after the session.

Question:  how well does it support ARIA?  Answer:  we make extensive use of ARIA in our web apps, and we rely on open standards and participate in working groups.


Scaling Web Accessibility at Facebook

This is my fourth session from the first day at the CSUN conference.  This session “…covers Facebook’s work over the past year to scale web/mobile accessibility across the company’s large engineering department.”  Description comes from the conference event guide.

Presenters:

  • Jeffrey Wieland
  • Ramya Sethuraman
  • George Zamfir (@good_wally)

 

RESOURCES

 

BACKGROUND:  how to scale accessibility in a large engineering environment.

  • Complexity:  Each platform has different considerations
  • Awareness:  product teams need to know what to do for accessibility
  • Speed:  need to integrate accessibility into the process

 

JEFFREY’S SEGMENT

The accessibility team came into existence after recognizing that users were using AT to mediate their relationship with the product.  Jeffrey appealed to user interface engineering (UIE), the front-end team that builds all the core components of the product.  These components are similar to the design pattern library work that LinkedIn is doing.

Unfortunately, most Computer Science graduates do not have much exposure to accessibility.  So, accessibility has been integrated into the core training regimen at Facebook.  If it’s a part of the core training, then it sends a message to the developers that it’s important.

Testing matters, so we’ve invested in something called an “accessibility nub,” essentially a flyout menu (built in-house) that lets developers toggle checks for accessibility best practices.

Centralizing documentation and best practices has helped engineering review things “in-context.”  Contextual links to this resource have been embedded wherever needed.

These steps have given us the ability to “have more hands on deck” with respect to accessibility.  This has grown the number of developers working on accessibility fixes to over 80(!)

A number of ambassadors have been enlisted to help evangelize accessibility internally.  We also have channels by which we communicate with our users (see resources section above).

 

RAMYA’S SEGMENT

Ramya began her segment by describing the alt text issue she had posting a picture of her one-year-old daughter trying yogurt for the first time (very cute!).

Caption generator:  takes bits of metadata about uploaded photos and auto-generates a caption for the user.  Ramya demonstrated how this sounds with VoiceOver, both before and after using the caption generator.  Adding metadata elements like the location where the photo was taken was very effective!

Semantic Structure has been added via headings and landmarks.

The core components library contains controls like buttons, links, images, etc.  Accessibility is built directly into these components.  Dialogs now have keyboard enhancements, with appropriate labeling, and focus cycles within an open dialog.
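As a side note for readers unfamiliar with the pattern, here is a minimal focus-cycling sketch, assuming a simple dialog markup; it is illustrative only, not Facebook’s component code.

```html
<!-- A minimal focus-trap sketch (illustrative only, not Facebook's
     component code): keep Tab cycling inside an open dialog. -->
<div role="dialog" aria-labelledby="dlg-title" id="dlg">
  <h2 id="dlg-title">Share this post</h2>
  <input type="text" aria-label="Message">
  <button>Share</button>
  <button>Cancel</button>
</div>
<script>
  var dialog = document.getElementById('dlg');
  dialog.addEventListener('keydown', function (e) {
    if (e.key !== 'Tab') { return; }
    var items = dialog.querySelectorAll('input, button');
    var first = items[0], last = items[items.length - 1];
    // Wrap focus: Tab on the last control returns to the first;
    // Shift+Tab on the first control jumps to the last.
    if (!e.shiftKey && document.activeElement === last) {
      e.preventDefault();
      first.focus();
    } else if (e.shiftKey && document.activeElement === first) {
      e.preventDefault();
      last.focus();
    }
  });
</script>
```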

Keyboard Shortcuts (a sketch of how such a handler might look follows this list):

  • j/k keys are used for moving focus forward and backward, respectively.
  • “c” key is used to comment on a post
  • “s” key is used to share the post
  • “o” key is used to open attachments like photos
  • “q” key to chat.
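Purely as an illustration of how single-key shortcuts like these can be wired up (the selector, key handling, and helper below are my assumptions, not Facebook’s actual code):

```html
<!-- Hypothetical sketch of single-key shortcuts like j/k; the selector
     and helper function are illustrative, not Facebook's actual code. -->
<script>
  document.addEventListener('keydown', function (e) {
    // Never swallow keys while the user is typing in a form field.
    var tag = document.activeElement.tagName;
    if (tag === 'INPUT' || tag === 'TEXTAREA' || e.ctrlKey || e.metaKey) {
      return;
    }
    var stories = document.querySelectorAll('[role="article"]');
    if (e.key === 'j') { moveFocus(stories, +1); }  // next story
    if (e.key === 'k') { moveFocus(stories, -1); }  // previous story
  });

  function moveFocus(items, delta) {
    if (!items.length) { return; }
    var i = Array.prototype.indexOf.call(items, document.activeElement);
    var next = Math.min(Math.max(i + delta, 0), items.length - 1);
    items[next].setAttribute('tabindex', '-1');  // make it focusable
    items[next].focus();
  }
</script>
```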

High contrast mode is also available now.

A lot of effort was put into making the desktop view accessible.

 

GEORGE’S SEGMENT

Quality Assurance:  all is done with scale in mind.  It all started with a spreadsheet, and testing was done in an ad-hoc fashion by a very small accessibility team.  In order to scale, it had to spread to the entire engineering team!

We now run standardized regression tests on a regular basis for each platform.  We also do user testing with people who have disabilities.

QA (test run) > ProdOps (triage & assignment) > Eng (improvements)

Where does the A11Y team fit into the above?  It fits in wherever it makes sense: across products and platforms, and increasingly it runs on auto-pilot.

“If you build the product, accessibility is YOUR responsibility.” It’s just another form of code quality.

 

JEFFREY’S SEGMENT

Mainstreaming accessibility is something that we want to pursue at all levels.  One of their front-end engineers was working on the web messenger product, and when asked if he’d tested with a screen reader, his response was “what’s a screen reader?”  This is not his fault, because he was not exposed to accessibility during his education.  So, Facebook is now partnering with PayPal and Stanford engineering to get students to think about accessibility.  This will help to build awareness.

 

Q & A SEGMENT

Question:  how much of the data associated with the photo example presented earlier is auto-generated versus user-supplied?  Answer:  it has to be user-entered content.

Question:  how are you testing for high-contrast mode?  Answer:  well, it’s complicated…(I didn’t catch all of the answer).

Question:  are the testing links you talked about generalizable for use by public testers?  Answer:  not really, but we’re working on it.

Question:  how do you track focus when using keyboard shortcuts?  Answer:  we return the currently active element.

Question:  have you been able to document whether or how the accessibility features have been implemented?  This is a big challenge, quantifying the impact your work has made.  Answer:  we’re doing a lot around measurement, which helps improve where we focus our efforts.  We do read all of the feedback we receive, both positive and negative…please be candid with us!

Question:  where does the role of the engineer start and end?  Where does design fit in…how do you get accessibility baked in?  Answer:  we’re still defining how this works at Facebook.  Some things engineers should absolutely be involved with, notably focus and readback.  Things get trickier when building more dynamic and collaborative tools.  George indicated that their designers were ready to roll straight into implementation and pretty much ate up everything he gave them.

Question:  do you need to activate keyboard shortcuts somewhere in the user preferences?  Answer:  no!

Comment:  I wanted to mention that I submitted a JavaScript-related accessibility bug recently and got a response THE SAME DAY.  Great, high-touch service (this got some applause).

 


The (not so) Surprising Parallels Between Responsive Design and Accessibility

This is my third session from the first day at the CSUN conference.  This session is hosted by my friend George Zamfir, who I met at this conference last year.  The session guide describes George’s session like so:  “Responsive design has borrowed principles & best practices from accessible design.  Learn about both and how to apply them to your projects.”

Presenter:  George Zamfir (@good_wally)

 

RESOURCES

 

In this post, I’m going to dispense with my normal slide-by-slide narrative structure.  George’s presentation moved way too fast and had lots of builds. 😉

 

George discovered that responsive design was a great way to build accessibility into his projects.  He showed us some of his previous work on the Scotiabank web site.  This ended up being TWO projects:  first for the desktop version of the site, then the mobile responsive version of the site.  He also worked on the mobile version of the bank’s credit card application.

 

What do all assistive technologies have in common?

  • They don’t care much about your design, and will readily change it for the user (a lot like RSS readers)
  • Content trumps design, regardless of screen size
  • RWD is not about the design, it’s about updating the design to bring out the content

 

 

Disability categories:  Visual, Auditory, Mobility, Cognitive & Speech.  Don’t measure people through the disability lens – which automatically focuses on what people are NOT able to do.  We now measure disability by what people CAN do.

 

Accessibility is contextual, so we should cater to users’ context.  You’re not necessarily engaging with someone working on a desktop computer with a large monitor, keyboard and mouse anymore.  He referred to a study of how people hold their phones and also the W3C’s BAD (Before and After Demonstration) page.

  • One simple tip:  adding padding around text links increases the “hit size” (see the CSS sketch after this list)
  • Keyboard accessibility translates well into touch-friendly interfaces.
  • Use native controls wherever possible.  On the bank site, they used a <div> instead of a native <select> control, which became a problem when they went mobile.
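Here is a minimal CSS sketch of the padding tip (the selector and values are mine, for illustration):

```html
<!-- A minimal sketch of the padding tip: enlarge a text link's hit
     target without visually moving it (negative margin cancels the
     extra space). -->
<style>
  nav a {
    display: inline-block;
    padding: 12px 16px;    /* bigger touch/click target */
    margin: -12px -16px;   /* keep the layout visually unchanged */
  }
</style>
<nav>
  <a href="/accounts">Accounts</a>
  <a href="/credit-cards">Credit Cards</a>
</nav>
```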

 

Design for the edge cases (mobile first design)

If you start with a small screen, prioritization really matters. A variation of this model is designing for edge cases.  If you design for the harshest conditions first, the in-between cases are much easier to work out.  Consider accessibility as one of your edge cases!

 

RWD is a champion for A11Y; we have common goals for our users.

 

Question:  how do you handle navigation in RWD?  Answer:  I target the simplest possible device and design progressively.

Question:  Do you do anything special about device orientation changes?  Answer:  why would you change the content?  Perhaps you change the layout, but you should not change the content.

Question:  what is your process when you have the luxury of a “clean sheet” design…how do you handle the lowest common denominator?  Answer:  I like to start with everything besides the content.  We built the framework, and the content just fits into that framework.

Question:  what about hiding content based on context?  How do you handle that?  Answer:  well, that’s probably not the best way to go…you’re probably doing it wrong if you’re doing it that way.

 

BONUS CONTENT:  CRASH COURSE IN RWD

Foundations of RWD:  fluid foundation, media queries, responsive images.  In short:  Make your layout flexible!

  • Use ratios (ems) and percentages instead of absolute values (px).
  • Adapt to the size of the viewport:  width=device-width, initial-scale=1
  • Apple assumes the normal viewport size is 980 pixels, so if you don’t add the viewport declaration, you can get pages with text that looks very small on a small screen.
  • Media Queries in CSS:  start with the smallest screen first, and then the larger screens are additive over that definition.
  • Responsive Images:  for simplicity’s sake, start with this:  max-width: 100%; height: auto; (a combined sketch of these tips follows)
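Here is a minimal sketch pulling these tips together (class names and the breakpoint are mine, for illustration):

```html
<!-- A minimal sketch combining the crash-course tips above. -->
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
  /* Mobile-first: the smallest screen is the default... */
  .content { width: 100%; }

  /* ...and larger screens are additive over that definition. */
  @media (min-width: 40em) {
    .content { width: 66.667%; }  /* ratios, not absolute pixels */
  }

  /* The simplest responsive-image rule. */
  img { max-width: 100%; height: auto; }
</style>
<div class="content">
  <img src="photo.jpg" alt="A description of the photo">
</div>
```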

Quirks in Web Standards, Browsers, and Screen Readers

This is my second session of the first day at the CSUN conference; the session “takes a look at quirks and bugs in browsers and screen readers, what they mean for users, and how to avoid, fix, or work around them.”  (Description is from the conference session guide.)  As someone who is actively involved in building consistent web experiences (i.e. browser compatibility), I’m interested in how Ian does this when you add assistive technology into the mix.  Any misinterpretations of Ian’s presentation are entirely my own.  Any errata, please let me know!

Presenter:  Ian Pouncey, Accessibility Specialist, BBC (@IanPouncey)

 

RESOURCES

 

SLIDE ONE

Ian took some time to establish his credibility…

He used to work on the Yahoo! home page, and more recently has been a web developer at the BBC.  He’s been doing this for about 14 years and has written a book about CSS for Wrox.

Room was made up of developers, technical folks, screen reader users, and those obliged to come (a little humor).

Ian shared a few humorous slides about “skip to content” links, where the joke hinged on where you place the accent in the word “content” (hopefully he’ll post those slides).

 

SLIDE TWO

He shared a couple of pages about skip links from Gez Lemon and Terrill Thompson, and then demonstrated dynamic skip link code using window.location.hash.
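Here is my reconstruction of the dynamic skip-link technique, not Ian’s exact slide code:

```html
<!-- A reconstruction of the dynamic skip-link idea (not Ian's exact
     code): set the hash, then focus the target so keyboard and
     screen-reader users actually land on the content. -->
<a href="#main" id="skip">Skip to content</a>
<div id="main" role="main">…</div>
<script>
  document.getElementById('skip').addEventListener('click', function (e) {
    e.preventDefault();
    window.location.hash = 'main';
    var main = document.getElementById('main');
    main.setAttribute('tabindex', '-1');  // allow programmatic focus
    main.focus();
  });
</script>
```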

 

SLIDE THREE

Ian showed form error listings and how to move focus to the fields with errors.  Unfortunately, moving focus by setting window.location.hash only works reliably in IE!
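A common workaround, and my sketch rather than Ian’s slide code, is to move focus to the offending field directly instead of relying on the hash:

```html
<!-- Because the hash trick is unreliable outside IE, focus the field
     named in the error link's hash directly. -->
<ul>
  <li><a href="#email" class="error-link">Email address is required</a></li>
</ul>
<label for="email">Email</label>
<input type="text" id="email">
<script>
  document.querySelector('.error-link').addEventListener('click', function (e) {
    e.preventDefault();
    // Form fields are natively focusable, so focus() works everywhere.
    var id = this.getAttribute('href').slice(1);
    document.getElementById(id).focus();
  });
</script>
```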

Ian did a demo of a skip link bug when using off-page content on iOS 7.  He prefers positioning hidden content off the top of the page rather than off the left.  This technique, however, can result in some interesting behavior, notably the skip link appearing after an up-swipe gesture with VoiceOver turned on, after which further navigation makes the screen go completely blank.

Another option is to use off-screen CSS clipping, which unfortunately results in a lot more code.  This clipping technique is used on about 98% of the BBC web site, so if you want to see it at work, head over there 🙂
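For reference, here is the widely used clipping pattern; this is the general technique, and the BBC’s exact rules may differ:

```html
<!-- The common clipping pattern for visually hidden content. -->
<style>
  .visuallyhidden {
    position: absolute;
    width: 1px;
    height: 1px;
    overflow: hidden;
    clip: rect(1px, 1px, 1px, 1px);  /* clip the box down to a point */
  }
  .visuallyhidden:focus {
    position: static;  /* reveal the element when it receives focus */
    width: auto;
    height: auto;
    clip: auto;
  }
</style>
<a href="#main" class="visuallyhidden">Skip to content</a>
```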

 

SLIDE FOUR

Next, Ian shared an ARIA landmark bug in iOS 6.  The bug causes VoiceOver to announce ALL landmarks simply as “landmarks,” which is not very helpful.  It is resolved by adding a heading to every landmark.
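A sketch of the fix, reusing the clipping class from the previous snippet so the heading stays visually hidden:

```html
<!-- Give every landmark a heading so VoiceOver has something better
     than "landmark" to announce. -->
<nav role="navigation">
  <h2 class="visuallyhidden">Site navigation</h2>
  <a href="/">Home</a>
  <a href="/news">News</a>
</nav>
```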

 

SLIDE FIVE

Finally, Ian shared a bug with Live Regions.  You can’t always rely on live regions to provide information to the AT at the appropriate time.

Solution:  JavaScript-added content can be read once it’s been written to the screen.  So, you can build an empty div that quietly waits for content to be written into it.  Be sure to give it role="status" and aria-live="assertive".
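A minimal sketch of that pattern:

```html
<!-- An empty live region waits in the DOM; whatever JavaScript writes
     into it gets announced by the screen reader. -->
<div id="announcer" role="status" aria-live="assertive"></div>
<script>
  function announce(message) {
    // Writing into the already-rendered region triggers the readout.
    document.getElementById('announcer').textContent = message;
  }
  announce('3 new results loaded');
</script>
```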


Accessibility Features of HTML5

This is my first session of the day Wednesday morning, the first day of the CSUN conference proper.  I’m hoping to get a good overview of HTML5 accessibility features while at the conference this week, so I’m looking forward to this session.  This particular presentation is about the features available in HTML5 that you can use to improve the accessibility of your websites.  Session will include code examples and demonstrations, which I will do my best to capture in this post.

Presenter:

 

RESOURCES

 

 

SLIDE ONE

Introductions:  the presenter is the staff contact for the W3C HTML Accessibility Task Force.  The session agenda:

HTML5

Open Web Platform

  • Accessibility in the OWP

HTML5 Accessibility / Demos

  • Improved Semantics
  • ARIA
  • Graphics

 

SLIDE TWO

A description of the HTML Accessibility Task Force.  Its mandate is to develop accessibility solutions: producing technical reports, writing extension specifications, and providing integration paths.  The task force is about reaching consensus among the groups involved.

 

SLIDE THREE

A very busy graphic describing the “Open Web Platform” which includes virtually any industry or device that needs to connect to the Internet.  Since the web is meant to connect everyone, it needs to be as accessible as possible.

 

SLIDE FOUR

Tim Berners-Lee:  The web is the great equalizer!

 

SLIDE FIVE

  • Added structured access through improved semantics.
  • Ability to bring desktop paradigms into the browser
  • More options for creating text equivalents for graphics
  • Native support for synchronized captions, sign language, internationalization and more.

 

SLIDE SIX

Mark presented a page of code, pretty standard for a blog.  He then ran through the page with VoiceOver.

 

SLIDE SEVEN

HTML5 Has Much Improved Semantics!

It allows you to describe your document structure with sectioning elements, including <section>, <nav>, <article>, <aside>, <header>, <footer>.  You don’t really have to do anything special except use a variety of new sectioning elements.  What makes a useful aside?  A pull quote, comments, etc.
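A bare-bones sketch of what that structure looks like for a blog page:

```html
<!-- A blog-page skeleton using the new sectioning elements instead of
     anonymous <div>s. -->
<header>
  <h1>My Blog</h1>
  <nav><a href="/archive">Archive</a></nav>
</header>
<article>
  <h2>Post title</h2>
  <p>Post content…</p>
  <aside>A pull quote, or related comments.</aside>
  <footer>Posted on <time datetime="2014-03-19">March 19, 2014</time></footer>
</article>
<footer>Copyright and contact links.</footer>
```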

 

SLIDE EIGHT

More new semantics, this time for form inputs.  The new input types include:  color, date, datetime, email, month, number, range, search, tel, time, url, week

Not all of these will necessarily make it into the HTML5 spec.

 

SLIDE NINE

New attributes for the input element, including autocomplete, autofocus, autosave, list, max/min/step, maxlength, pattern, required, spellcheck.

These make building forms much easier, particularly the task of form validation.
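A short sketch of what browser-native validation looks like with these attributes (the field names are mine, for illustration):

```html
<!-- Browser-native form validation using the new attributes;
     no JavaScript required. -->
<form>
  <label for="user-email">Email</label>
  <input type="email" id="user-email" required autofocus
         autocomplete="email" maxlength="254">

  <label for="qty">Quantity</label>
  <input type="number" id="qty" min="1" max="10" step="1">

  <!-- The browser blocks submission and flags invalid fields. -->
  <button type="submit">Submit</button>
</form>
```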

 

SLIDE TEN

ARIA Landmark roles include:  application, banner, complementary, contentinfo, form, main, navigation, presentation

The ARIA specification was built to expose accessibility API information (roles, states, and properties) to assistive technology.  It was designed to allow dynamic web pages to be accessible and provide a more consistent experience for all users.

 

SLIDE ELEVEN

ARIA

  • Accessibility for dynamic content (it can also be used to repair otherwise-inaccessible markup)
  • Wired into accessibility APIs (roles, states, and properties)
  • Programmatically link elements with labels and descriptions (aria-label, aria-labelledby, aria-describedby); see the snippet after this list
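A quick sketch of those three attributes in use:

```html
<!-- aria-label: a label with no visible text to point at. -->
<button aria-label="Close dialog">×</button>

<!-- aria-labelledby: reuse visible text as the label. -->
<h2 id="billing">Billing Address</h2>
<input type="text" id="street" aria-labelledby="billing">

<!-- aria-describedby: attach supplementary help text. -->
<label for="pw">Password</label>
<input type="password" id="pw" aria-describedby="pw-help">
<p id="pw-help">Must be at least 8 characters.</p>
```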

 

SLIDE TWELVE

Back to the code sample, only this time Mark replaced most of the <div> elements with a selection of the landmarks noted above.  By marking up the search field, the OS (a Mac in this case) used system styles for the search bar and button.  The demonstration highlighted how VoiceOver announces each section and how, using the rotor control, the user can directly choose from the various page elements.

One of the things I noticed as Mark was editing his code is that using HTML5 makes hand-coding of web pages and identification of code segments much easier.  If you’ve ever had to deal with multiple levels of nested <div>s, this is definitely something you’ll appreciate.

 

SLIDE THIRTEEN

Many of the benefits come when you view an HTML5-coded page on different platforms.  For example, iOS will bring up a numeric keypad when entering digits in a telephone number field.  For dates and hours, the OS will present the appropriate pickers, like a calendar or a clock.
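A sketch of the input types involved:

```html
<!-- Input types that trigger platform-specific widgets: tel brings up
     a numeric keypad on iOS, while date and time bring up the
     platform's pickers. -->
<label for="phone">Phone</label>
<input type="tel" id="phone">

<label for="arrival">Arrival date</label>
<input type="date" id="arrival">

<label for="checkin">Check-in time</label>
<input type="time" id="checkin">
```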

 

SLIDE FOURTEEN

Graphics

  • Added <figure> and <figcaption> to allow grouping of images with their description (see the sketch after this list).
  • Provides equivalent interactivity and behavior for dynamic and/or bitmap images via <canvas>
  • Provides extended descriptions for complex images via the longdesc attribute
  • Provides detailed guidance to authors:  ALT Guidance (spec section 4.7.1.1)
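A sketch of the <figure> grouping, with longdesc pointing at an extended description (the file names are mine, for illustration):

```html
<!-- Group an image with its caption; longdesc links to a full
     description of the complex image on a separate page. -->
<figure>
  <img src="chart.png"
       alt="Bar chart of 2013 survey results"
       longdesc="chart-description.html">
  <figcaption>Figure 1:  Survey results (full description linked via
    longdesc).</figcaption>
</figure>
```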

 

SLIDE FIFTEEN

  • Canvas allows interaction with pixels on a page, but…they’re just pixels in a box.  Solution:  specify regions
  • How do we define roles/states/properties?  Solution:  map those regions to Fallback Content.
  • How do we indicate focus?  Solution:  drawFocusIfNeeded()

 

SLIDE SIXTEEN

Mark gave a demonstration of canvas that showed a list of two elements with checkboxes.  What makes this interesting is that those elements were literally just pixels painted onto the screen, backed by fallback content so assistive technology could still interact with them.
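Here is a rough sketch of that pattern, my reconstruction rather than Mark’s demo code (note that drawFocusIfNeeded() support varies by browser):

```html
<!-- Real checkboxes live in the canvas fallback content (what AT
     interacts with), while matching pixels are painted on the canvas. -->
<canvas id="c" width="220" height="60">
  <label><input type="checkbox" id="opt1"> Show line numbers</label>
  <label><input type="checkbox" id="opt2"> Wrap long lines</label>
</canvas>
<script>
  var ctx = document.getElementById('c').getContext('2d');
  document.getElementById('opt1').addEventListener('focus', function () {
    ctx.beginPath();
    ctx.rect(4, 4, 16, 16);        // the painted checkbox region
    ctx.stroke();
    ctx.drawFocusIfNeeded(this);   // focus ring around the current path
  });
</script>
```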

 

Unfortunately, Mark ran long and was unable to complete his presentation, so we missed out on seeing his slides on media.