
Machine Learning 101

Presenters

  • Greg Corrado, Senior Research Scientist, Google
  • Vincent Nestler, Professor & Assistant Director of Cybersecurity, CSU San Bernardino
  • David Vasilia, Enterprise Network Administrator & Faculty, CSU San Bernardino

Resources

  • Internet2 & GCP: internet2.edu/gcp
  • CS edu grants: cloud.google.com/edu

Machine Learning 101

  • Already in everyday products: Google Photos, Inbox, and Maps
  • 2 disciplines: AI and machine learning
  • Traditional AI systems are programmed to be clever
  • ML-based AI systems are designed to learn to be clever
  • Classic AI works on rules and contingencies; ML AI learns from examples and data.
  • Machines learn by example: a model (which has parameters) produces predictions, the predictions feed a learner, and the learner updates the parameters. This loop is surprisingly simple and generic (a minimal TensorFlow sketch follows this list).
  • Need 4 things: computational resources, good tools & algorithms, training examples, creativity and ingenuity of people.
  • Effective, but a very gradual process that takes millions or billions of examples to work; the loop needs to cycle many, many times.
  • ML is coming of age this decade because computational power now exists and is cheap and plentiful enough, e.g. CPUs, GPUs, and Google’s TPUs.
  • tensorflow.org: a toolkit for machine learning
    • Open standard
    • Next gen deep learning tools built in
    • One system flexible enough for ML research
    • Robust enough for use in real products
    • Same software Google researchers use
  • Deep learning is not one function, but a set of composable sub-functions for model building.
  • Distributing ML Tech Globally
    • Shared Tools: TensorFlow + CloudML
    • Ready-made ML systems (Cloud Vision API, Cloud Speech API, Cloud Translate API, etc.); see the Cloud Vision sketch after this list
    • Use our tools to build your own system!
    • Example: TensorFlow cucumber sorting tool (really!)
    • Shared knowledge: open research publication at international conferences; global direct community education; funding academic research and education.
  • Google published 90+ papers in the last 4 years
  • Takeaways:
    • The differences between AI, ML, and robotics
    • It isn’t magic, just a tool
    • Machines learn best from examples
    • Why now? fast computation
    • Making ML work requires creativity and ingenuity, cheap and fast computation, examples to learn from (data), and tools & algorithms; TensorFlow makes the ML software available for free.
    • Google Cloud makes hardware available.
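
To make the “learn by example” loop above concrete, here is a minimal sketch using TensorFlow’s Keras API. The toy dataset, layer sizes, and training settings are illustrative assumptions rather than anything shown in the session; the point is that a model is a composition of parameterized sub-functions, and the learner repeatedly compares predictions against the examples and nudges the parameters.

    # Minimal sketch of the example-driven learning loop (assumed toy data).
    import numpy as np
    import tensorflow as tf

    # Toy training examples: random inputs and the labels we want predicted.
    x_train = np.random.rand(1000, 4).astype("float32")
    y_train = (x_train.sum(axis=1) > 2.0).astype("float32")

    # A model is a composition of simple, parameterized sub-functions (layers).
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])

    # The "learner": compare predictions to the examples, then adjust the parameters.
    model.compile(optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"])

    # Cycling over the examples many times is what gradually improves the model.
    model.fit(x_train, y_train, epochs=20, batch_size=32, verbose=0)

    print(model.predict(x_train[:3], verbose=0))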
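
As one example of the ready-made systems listed above, here is a hedged sketch of calling the Cloud Vision API from Python for label detection. It assumes the google-cloud-vision client library is installed and application credentials are already configured; the image file name is a placeholder.

    # Sketch of using a ready-made ML system: Cloud Vision label detection.
    # Assumes `pip install google-cloud-vision` and configured credentials.
    from google.cloud import vision

    client = vision.ImageAnnotatorClient()

    # Placeholder image path; swap in any local photo.
    with open("campus_photo.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # The pre-trained model returns labels it recognizes in the image.
    response = client.label_detection(image=image)
    for label in response.label_annotations:
        print(f"{label.description}: {label.score:.2f}")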

Cloud for Higher Ed

  • Programming a campus rover: students are given a sensor, a Raspberry Pi, and Python; they then need to figure out how to integrate them (a minimal sensor-polling sketch follows this list).
  • Hacking now means hacking things together. You don’t have to be an engineer and you don’t need to know everything.
  • How can I level the playing field for my students? Give everyone access to Chrome and a Google Compute Engine instance. Everyone can look at and work with the same environment, and they can explore from there.
  • A project we worked on in class: Android mapping of WiFi signal strength on campus. War driving captured signal strength readings, and a mapping API plotted them onto a real topographical map. Now we can “see our WiFi” (see the mapping sketch after this list).
  • We used Intermapper software to map the Internet, specifically the CENIC network from Los Angeles. The students loved this.
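
For the rover exercise above, a minimal sketch of the kind of sensor integration students start from might look like the following. It assumes a simple digital sensor (for example, a PIR motion sensor) wired to GPIO pin 17 and the RPi.GPIO library; the pin number and polling interval are illustrative choices, not the actual class assignment.

    # Minimal sensor-polling sketch for a Raspberry Pi (assumed digital sensor on pin 17).
    import time
    import RPi.GPIO as GPIO

    SENSOR_PIN = 17  # assumed wiring; adjust to match the actual sensor

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SENSOR_PIN, GPIO.IN)

    try:
        while True:
            reading = GPIO.input(SENSOR_PIN)  # 0 or 1 for a digital sensor
            print(f"{time.strftime('%H:%M:%S')}  sensor={reading}")
            time.sleep(1)  # poll once per second
    finally:
        GPIO.cleanup()  # release the GPIO pins on exit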
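
And the WiFi mapping project could be sketched roughly as below: take logged (latitude, longitude, signal strength) readings from the war-driving run and drop them onto a map. The CSV file name, column layout, and the use of the folium library are assumptions standing in for whatever mapping API the class actually used.

    # Rough sketch: plot logged WiFi readings (lat, lon, RSSI in dBm) on a map.
    # Assumes readings.csv with one "lat,lon,rssi" row per sample and folium installed.
    import csv
    import folium

    with open("readings.csv") as f:
        rows = [(float(lat), float(lon), float(rssi)) for lat, lon, rssi in csv.reader(f)]

    # Center the map on the first reading.
    wifi_map = folium.Map(location=[rows[0][0], rows[0][1]], zoom_start=17)

    for lat, lon, rssi in rows:
        # Stronger signal (closer to 0 dBm) gets a greener marker.
        color = "green" if rssi > -60 else "orange" if rssi > -75 else "red"
        folium.CircleMarker(
            location=[lat, lon],
            radius=5,
            color=color,
            fill=True,
            popup=f"{rssi} dBm",
        ).add_to(wifi_map)

    wifi_map.save("wifi_map.html")  # open in a browser to "see the WiFi"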

Panel

  • What is the difference between deep learning and machine learning? ML is the larger field of making machines that learn. DL is a small subset of this.
  • How far is Google taking cultural sensitivity into account with ML? Take Translate as an example: you can dig into what the algorithm did to come up with its response.
  • If we use a Google tool, does this tool report what it learns back to Google? NO.
  • What is the pricing model for Google Cloud for Google Apps customers? It is independent of G Suite.

Next Steps

  • Google is now a member of Internet2.
  • Will work with universities across the US to explore how Google Cloud Platform can better serve higher education
  • Help students build what’s next!
  • GCP Education Grants are available to faculty in the US teaching university courses in CS or related fields during the 2016–17 academic year. Examples: general CS, cybersecurity, systems administration, networking.

CSUN 2014 Web Track Mega Post

As usual, I like to make a post that sums up my entire conference experience…I call this the “Mega Post.”  As you may have guessed from the titles of the sessions I attended, I’m interested in the web track.  If the web is your bag, you just might find all this helpful.

Enjoy!

 

Friday, March 21

 

Thursday, March 20

 

Wednesday, March 19

 

Tuesday, March 18

 

Monday, March 17


All About Google Chrome

This is my fifth session from the first day at the CSUN conference.  This session covers “…the built-in accessibility features of Chrome, Chrome OS and Chromebooks” (description from the conference event guide).  I attended Google’s pre-conference seminar in 2013, and it was very informative (my 10-part blog post can be accessed here).  I hope they pack in the juicy details this year too 🙂

Presenters:

  • Dominic Mazzoni, Software Engineer on the Google Team (@)
  • Peter Lundblad, Engineer on the Google Chrome Team (@)
  • David Tseng, Software Engineer, Google Chrome Team (@)

 

David Tseng showed off a remote control that comes with ChromeVox built-in.  It’s meant for video conferencing.  David used the tool to join a Google Hangout (a kind of video call).  It worked well in the demonstration, at least from the perspective of selecting and joining an existing Hangout.

 

Dominic Mazzoni talked briefly about the importance of the web as the world’s largest open platform.  The Chrome browser was originally introduced with the following three principles/priorities in mind:

  • Speed:  re-introduced competition into the browser market
  • Simplicity:  create a browser that doesn’t distract from the content you’re looking at.  Also, updates happen automatically.
  • Security:  updates resolve security holes ASAP

Dominic jumped into ChromeOS and showed some of the accessibility features available, including on-screen keyboard, screen magnifier, large mouse cursor, high contrast mode, sticky keys, tap-dragging, and ChromeVox itself.

 

Peter Lundblad demonstrated ChromeVox, a screen reader made especially for ChromeOS.  Support for voices in multiple languages has been recently added; Peter demonstrated this with both German and British female voices.  Refreshable braille device support has also been added to ChromeOS.  This particular demonstration was interesting to me because I’ve never actually seen one of these devices in action.  There is a “growl-like” on-screen display of the braille output so sighted users can see what the braille device itself is showing.  Peter added a bookmark using the braille device.

 

Dominic then took over and talked about synchronized bookmarks (and other settings) that “follow the user” to whatever device they may be using.  He demonstrated this using an Android phone.  The phone he showed the audience successfully displayed the bookmark that Peter had set on the Chromebook a few minutes before.  Dominic then activated the local search control (a circular control with links to phone functions), swiping up and to the right to activate a link.

Dominic then demonstrated the Chromecast, which lets you “cast” content from any Chrome browser to a display the Chromecast is plugged into.  Laura Palmero shared her personal experience using the Chromecast.  Laura has a vision disability that makes it difficult for her to view things in the center of her field of view, so she relies on high-contrast displays that are close to her (like her phone).  The Chromecast has made it much easier for her to interact with her large-screen television at home…she now controls it using her phone, which she uses all the time.

 

Question:  what about the accessibility of Google Docs?  There is a Google Docs session tomorrow (Thursday) that goes into great detail about Google Docs.

Question:  what is the strategy with Chromebook?  It seems like just an interesting toy.  Answer:  it’s not a general-purpose computing device that’s meant to replace all computers.  It’s a device that’s made to work with the web.

Question:  what tools are you providing so developers can have access to things like view source, that sort of thing?  Answer:  we know we have some work to do with this, but there are workarounds.  Please speak with us after the session.

Question:  how well does it support ARIA?  Answer:  we make extensive use of ARIA in our web apps, and we rely on open standards and participate in working groups.


The CSUN 2013 Web Track Mega Post

Greetings, fellow web accessibilistas!  (not to be confused with accessiballistas, the little-known and even less-documented accessible siege engine of yore).

As you may have gathered if you followed my live blog posts a couple weeks ago, my interest in attending the CSUN 2013 conference was almost exclusively web-related.  Now that it’s been a couple weeks and I’ve had some time to reflect, I figured it would be a good idea to consolidate everything into one mega-list.  This year, there were several times when I wished I could have been in two places at once.  Hopefully this gives you a pretty representative sampling of what was on offer web-wise this year.  Follow me @paulschantz for more web-related topics, including accessibility, project management, web development and design philosophy, thoughts on working in higher education, bad clients, off-color humor, and other ephemera.  Enough self-promotion…on with the list!

Pre-Conference Seminar:  Google Accessibility

Day One:  February 27, 2013

Day Two:  February 28, 2013

Day Three:  March 1, 2013


Google Accessibility – Partially Digested Observations

Holy moly, that was an information-packed session today!  And, what a difference from last time I saw Google at #CSUN.

I saw Google’s presentation on accessibility when I attended the #CSUN conference (I believe) four years ago.  At that time, I got the impression that Google was “phoning it in.”  The reps they sent at that time were clearly lower-level, more tech-oriented staff and didn’t present their story to the #CSUN audience in a compelling or memorable way.  If I were cynical, I’d say their approach at that time smacked a bit of technical smugness.

Fast forward four years…

Today, there were no fewer than 15 people from Google, including product managers from their core applications, internal accessibility evangelists, and development staff.  They’re not messing around now.  Here are some of the standout items, from my perspective:

  1. Google’s development team is working hard to standardize keyboard navigation across their core applications.  This is huge, and will pay big dividends for all users in the very near future.
  2. For obvious reasons, Calendar was not mentioned much.  To Google’s credit, they did not evade critical questions.  Calendaring is freakin’ hard – my team made a public web calendar for the CSUN campus a few years back, and I can assure you that that effort was no joyride:  www.csun.edu/calendar
  3. Google acknowledges the problem of keyboard shortcut collisions.  Sorry folks, there are no “standard” keyboard shortcuts other than the ones that have come about through the historical cruft of the software industry.  People using niche apps will unfortunately be left in the lurch.  This isn’t all bad though, because…
  4. …Google’s larger plan is to have their entire ecosystem in the cloud.  Like it or not, this is the future of computing.  This hearkens back to a conversation I had with my Computer Science colleagues regarding “the cloud” about five years ago.  My question back then was “what happens when everything is available in the cloud?”  Answer:  “we pick and choose those services that we need and trust.”  Google is building those services today, and from what I can see, I trust that they’re working on it.  BUT…we have to continue to push for more accessibility.  If we don’t evangelize and make it a priority, it just won’t happen.
  5. Speaking of evangelism, I get the distinct sense that the push for accessibility within Google is an uphill battle at times, but the organization is really starting to “get it.”  Working as a director of a web development team in the CSU with responsibilities around ensuring accessibility on my campus, I can relate.
  6. The advances in accessibility built into the Android OS (Accessibility Service Framework and APIs) are downright impressive.  The work around creating an intuitive “navigation language” alone merits a gold star in my opinion.
  7. Google’s position of supporting current browser version “minus two” is a goddamn blessing and should be shouted from the mountain tops.  I feel very strongly about this, and have written a browser support statement to clarify why I take a similar position with my team’s work:  http://www.csun.edu/sait/web/browsers.htm

Maybe it’s just me being cynical again, but I could kind of sense a faint hint of technical smugness today.  Its character was different though, and I think that comes from the audacious scope of what Google is trying to do as a company.  When you throw around statements like “the web is our platform,” I guess it’s hard to be humble.
