Thursday, May 30, 2013

Authorizer -- a breach in the ACM Digital Library paywall

The Association for Computing Machinery (ACM), the leading computer science professional society, publishes many technical journals. The articles are online in the ACM Digital Library, but one must either be a member or pay to access them.

I am nearly two years behind the times, but I just learned that, in fall 2011, ACM decided to allow authors to publish links to their Digital Library articles on their own Web sites.

ACM calls the service "Authorizer." Readers who follow an Authorizer link get a free copy of the article.

The decision to make the articles available is left up to the author, and doing so is very simple. The author registers for a free Digital Library account and completes a form giving the URL of his or her Web site. A single click then generates an HTML document with links to the author's articles in the Digital Library, which can be posted as is or edited as desired.
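For illustration, the generated document is essentially a plain HTML list of links. Here is a minimal sketch of producing one -- the article titles and Authorizer-style URLs are made up; the real URLs are assigned by ACM:

```python
# Hypothetical sketch of an Authorizer-style link list.
# Titles and URLs below are invented for illustration only.
articles = [
    ("A Sample Paper on Networking", "https://dl.acm.org/authorize?N00001"),
    ("Another Paper on Education", "https://dl.acm.org/authorize?N00002"),
]

def link_list(articles):
    """Render (title, url) pairs as an HTML unordered list."""
    items = "\n".join(
        f'  <li><a href="{url}">{title}</a></li>' for title, url in articles
    )
    return f"<ul>\n{items}\n</ul>"

print(link_list(articles))
```

An author could paste the resulting fragment into an existing publications page, which is why editing the generated document "as they wish" is straightforward.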

I asked Bernard Rous, Director of Publications at ACM, how many authors had signed up. He reported that 1,850 authors had created 14,000 links, resulting in 44,000 downloads. That is a small percentage of the Digital Library and its authors, so ACM is going to contact authors and publicize the opportunity.

If an ACM author does not bother to Authorize their articles, they will remain behind the ACM paywall, but Authorizing access takes just a few minutes, and I would think most authors would jump at the chance. I Authorized my ACM articles in about five minutes and they are now available online. When I find the time, I will add an abstract to each.

I understand that the Authorizer service is not enough to satisfy open access purists, who would prefer that copyright remain with authors, leaving them free to place their work in the public domain or use a Creative Commons license, but it is a big step in the right direction.

Tuesday, May 28, 2013

Are poor students excluded from online education?

Can peer-to-peer support compensate for disadvantages?
Corey Davis, director of online learning at Our Lady of the Lake University, was interviewed recently on the Chronicle of Higher Education's Tech Therapy podcast. Davis says the discussion of MOOCs and other online courses often fails to consider minority (poor) students and the obstacles they face.

Davis addresses two general issues -- lack of access to technology and poor preparation in terms of both technical proficiency and conception of education.

He has addressed the technology access problem by developing an online course for Latino oil workers, who only have access to mobile phones, not computers with broadband connections. Presumably those workers also have poor computer/Internet skills and do not have an academically-oriented background.

The problems he addresses are very real, and his effort is valuable, but there are a couple of problems with the mobile, low-tech approach.

Let's imagine the same course developed twice -- once for delivery on a broadband-connected computer and once for delivery on a 4G cell phone. Which will be more frustrating and confusing? Taking the course using the computer will be simpler, faster and less frustrating than using the phone. If we restrict this to a 3G phone, the gap will be wider, and if we restrict it to a 2G phone (most common among poor people worldwide), it will be impossible to deliver the same course.
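A back-of-envelope calculation makes the gap concrete. The effective link speeds below are illustrative round numbers, not measurements:

```python
# Time to fetch a 100 MB lecture video at assumed effective link speeds.
# The speeds are illustrative assumptions, not measurements.
speeds_mbps = {
    "2G (EDGE)": 0.2,
    "3G": 2.0,
    "4G": 12.0,
    "Cable broadband": 15.0,
}

video_megabits = 100 * 8  # 100 MB expressed in megabits

for tech, mbps in speeds_mbps.items():
    minutes = video_megabits / mbps / 60
    print(f"{tech}: {minutes:.1f} minutes")
```

On these assumptions, a single video takes over an hour on 2G but about a minute on 4G or cable -- which is why the same course cannot simply be pushed down to 2G handsets.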

The people Davis wishes to support need and deserve the broadband/computer version.

The second problem has to do with attitudes toward and expectations about education. Someone raised in a family that values education and has learned to learn effectively has an advantage regardless of technology access or skill.

One solution to this problem would be monitoring and mentoring students who are doing poorly, but that is expensive and does not scale to MOOC proportions. Might a course on how to take a course help?

Explicit support of student peer groups might also be helpful -- formally tying the success of each group member to the performance of the group as a whole. The efficacy of peer tutoring is well documented. (This is reminiscent of both executive bonuses on Wall Street and the five-member groups of Bangladeshi village women who receive Grameen micro-loans -- quite a range).

I don't have a lot of solutions, but, as Davis says, these are important problems. If they are not addressed, online education may be part of the inequality problem, not part of the solution.

-----

Update 5/29/2013

I just finished this post by saying we would be short-changing poor students if we only gave them material on cell phones, but I do a lot of work in developing nations, and there are things we can do now -- even with 2G phones.

FrontlineSMS has an SMS server that has been used for many applications in developing nations. I just learned that they have an education-oriented version in beta. Check it out.
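To get a feel for the constraint, recall that a standard SMS is limited to 160 characters, so lesson content has to be chunked. A hypothetical sketch of that chunking (this is not FrontlineSMS's actual API):

```python
import textwrap

# Split lesson text into numbered SMS-sized segments.
# Room for a "(i/n) " part marker is reserved out of the 160-character limit.
def to_sms_segments(text, limit=160, marker_len=8):
    chunks = textwrap.wrap(text, width=limit - marker_len)
    n = len(chunks)
    return [f"({i}/{n}) {chunk}" for i, chunk in enumerate(chunks, start=1)]

lesson = ("Photosynthesis converts light energy into chemical energy "
          "stored in glucose. ") * 4
for segment in to_sms_segments(lesson):
    print(segment)
```

Every segment fits in a single text message, so even a 2G phone with no data plan can receive the material.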

I've got to throw in a photo that I show my classes -- motivated students studying under street lamps because they have no electricity at home:



Netflix drops Sponge Bob Square Pants (and why it matters)

Netflix is on a roll. They had the first big "made for the Internet" drama, House of Cards; they are by far the largest source of North American Internet traffic and their stock price and subscriber rolls are growing.

But, when my granddaughter and I settled down to watch an episode of Sponge Bob Square Pants yesterday, it was not available! (It still turns up as a search choice -- a tantalizing bug for fans of The Sponge).

A quick Google search turned up this quote in Netflix's bullish April 22nd, 2013 letter to shareholders:

As we continue to focus on exclusive and curated content, our willingness to pay for non exclusive, bulk content deals declines. At the end of May we’ll be allowing our broad Viacom Networks deal for Nickelodeon, BET, and MTV content to expire.
That sounds good in theory, but in practice they zapped Sponge Bob, one of the leading entries on the frequently-viewed list in my house. We are still a long way from à la carte TV.

-----

Update 6/5/2013

Sponge Bob is back -- Amazon picked up the children's programming that Netflix dropped.

This is a strange market -- we have two buyers (three if you count Hulu) and one seller (more if you think of the item for sale as TV content in general). Kind of an oligopsony, but not.

Monday, May 27, 2013

President Obama's effort to open healthcare data is paying off

President Obama and his opponents agree that market competition is a good thing, as long as the market is fair and free. A free market requires information, and Tom Friedman's latest New York Times column is about Obama administration initiatives to make healthcare data available online.

Friedman says it started in March 2010 when Health and Human Services (HHS) met with 45 entrepreneurs and offered them access to aggregate data on hospital quality, nursing home patient satisfaction and regional health care system performance. Ninety days later, HHS held a “Health Datapalooza” — a public event to showcase innovators who had harnessed the power of that data to improve health and care.

The entrepreneurs showed up and demonstrated over 20 new or upgraded apps they had built that leveraged open data to do everything from helping patients find the best health care providers to enabling healthcare leaders to better understand patterns of health care system performance across communities.

Free markets with information for consumers — sounds like something even Tea Party members might like. They can see more at Health Datapalooza 2013 next month.

In the meantime, they can check out this "infomercial" — they'll like it.

Thursday, May 23, 2013

Report on public, online education in California

The Twenty Million Minds Foundation has published a report on using online education to address bottleneck courses in California.

They survey the current efforts (California Virtual Campus, Cal State Online and UC Online), outline approaches for the future and offer a number of recommendations, including:

1. Making sure students have access to support services and academic mentoring -- akin to what many institutions already offer in their own classrooms, but extended beyond the campus when students sign up for, say, third-party offerings

(Important because "Online education classes typically require more self discipline, better reading skills, and better awareness of when to seek help than traditional classes do" -- for example see http://bit.ly/URnHKA).

2. Encouraging faculty to experiment with the integration of technology into their instructional practices

(Important because we are at the start of an expansion of online services that will enable faculty to develop novel online material -- for example, see http://bit.ly/121z9rU).

Innovation from MOOCs and modular courseware

I gave a talk on innovation from MOOCs and modular courseware at EduSoCal 13 last Tuesday.

The presentation format is annotated slides, most of which are self-contained -- images with long captions. Some examples:
  • Money is flooding into educational technology
  • Google is in a good position with their forthcoming play store for education, open source MOOC platform, Hangouts, Google Plus and Google Docs.
  • Thirteen sources of modular courseware
  • Fifteen historical new medium innovations
  • Eleven areas for MOOC innovation
You can download the presentation here.


Wednesday, May 22, 2013

The first business application of IBM's "cognitive computer," Watson

IBM has a General Manager for Watson Solutions, implying they expect Watson to do more than beat people on the Jeopardy quiz show.

That General Manager is Manoj Saxena, who announced the first Watson Solution -- a call center service. In the following video, he says call centers receive 261 billion calls per year and human operators perform poorly, resolving only half the issues people report.



The wording of the video is interesting. Mr. Saxena states that Watson is the first of a new generation of "cognitive computers ... a new class of machines that understand human language and are able to learn themselves and program themselves."

There are two Watson call center models -- agent-assisted and self-service -- and IBM says they have ten customers.

When the self-service applications are up and running, it should be possible to conduct a special-case Turing test to see if users can tell whether they are talking with a human or with Watson. I'll be impressed if it passes!

Monday, May 20, 2013

New UI and content incompatible with older Roku boxes

Roku recently changed their user interface and added a lot of new PBS and PBS Kids content to their offering, but older models cannot run the new version of their software or display the PBS content.

I have two Roku boxes. One received the Roku Version 5 update, but the other is older hardware and is stuck with Version 3. (Was there a Version 4?)

I do not care all that much about not being able to use the new, improved interface, but I was surprised and disappointed that I could not see the new content. In retrospect, I can think of business and technical reasons for Roku not supporting the new content format.

Well, I am going to have to replace my older Roku, and, this time, I will be aware that it may not be capable of playing future Roku content. It would also be good if they published a list of their channels, indicating which ones are compatible with which hardware.

Saturday, May 18, 2013

Sixty Minutes interviews of Bill Gates on the early days, philanthropy and Steve Jobs

If you missed Bill Gates' appearance on Sixty Minutes last Sunday, you should check it out.

(Transcript)

Gates talks about his goals of eradicating diseases like malaria and tuberculosis and improving health and standard of living in the poorest developing nations.

While he was at Microsoft, Gates quipped that he spent his mornings trying to make money and his afternoons giving it away. He is now a full time philanthropist, looking for appropriate technology and choosing and managing philanthropic projects as a hard-nosed businessman.

You also get a look at Gates as a person -- he reads and watches videos of college lectures voraciously. He also became emotional when talking about his relationship with Steve Jobs and his last visit near the end of Jobs' life.



In a separate video, Gates also talks about the school where he learned to program.



These are worth watching.

-----

Update 5/18/2013

Bill gives away a lot of money, but there is more where that came from -- he has regained his spot as the richest person in the world.

Friday, May 17, 2013

New Yorker Strongbox -- citizen+mainstream journalism

Internet-based citizen journalism predates the Web and has played an important role in hotspots around the world. Conventional journalists typically rely upon and magnify the impact of citizen journalism, as exemplified by the use of Twitter by television stations in the coverage of the Boston Marathon bombing.

The New Yorker Magazine has taken another step toward citizen+mainstream journalism -- The New Yorker Strongbox -- a secure, anonymous means of communicating with New Yorker editors.

Strongbox uses Tor servers to hide the identity and location of a citizen journalist who submits automatically encrypted messages and files to the New Yorker. The New Yorker editors communicate with the submitter using a randomly generated code name and they have no way to learn who the person is or where they are located.

In the past, Wikileaks has served as an intermediary between anonymous citizen journalists and mainstream publications. The New Yorker is now directly reachable. Will other mainstream publications follow their lead?

(If you cover citizen journalism as a teacher, these teaching modules may be of interest).

-----

Update 5/17/2013

Steve Gibson talked about Strongbox during episode 404 of his Security Now podcast. The Strongbox segment begins at the 22m 50s point. Gibson talks about the history of the project, which was developed by Internet activist Aaron Swartz. Gibson expects many news organizations to follow the New Yorker's lead, and he pointed out that the open source project is freely available on GitHub.

Thursday, May 16, 2013

Google Play Store for Education and the new 3Rs

Google announced Google Play for Education at Google I/O yesterday.

The store will be geared toward K-12 schools and organized by subject and grade, as shown here.

The apps will be focused on the Common Core State Standards which spell out the language and math skills and concepts students are expected to learn for success in college and careers.

The subject-grade organization and focus on the Common Core will make discovery of good apps relatively efficient.

The store will not be online until next fall, but they are encouraging developers to build apps now. I'd be surprised if folks like The Khan Academy and MathisPower4U were not far down this path already. If they and independent developers and creative teachers begin building apps, we may see a loosening of the textbook publishers' grip. (Apple can't be too happy about this announcement either).

This is a K-12 Play store, but it sounds like it could also serve as a remedial resource for today's university students -- you would be astounded by the math and language skills of many of today's undergraduates.

Some may chafe at the standardization and focus of Google Play for Education, but it only addresses a portion of the school day and the material is essential -- the 3Rs of our time.

Wednesday, May 15, 2013

Google Hangout for a synchronous class meeting: Fail today ... Win tomorrow

I was traveling, so I tried to use a Google Hangout On Air for a class session. It was a fail, but I am optimistic about the future. Let's talk about the failure first, then the rosy future.

My goal was to flip back and forth between talking with the students in the hangout and displaying and narrating PowerPoint presentations. The students who arrived too late to join the hangout would watch it on air and participate in the chat stream.

The session was a failure. Many students had unacceptably slow response times, and I wasted too much time switching between the conversation and PowerPoint. And that was the good bad news. The real bad news was that the hangout crashed. We restarted it once, then gave up after the second crash.

I convened the hangout from a cable connection with ping times around 20 ms, 15 Mbps up and 1.5 down, but the students' connection speeds and computers varied significantly.

I stopped all applications except Google Plus and PowerPoint on my computer, but forgot to advise the students to do the same.

My Dell laptop is a bit long in the tooth but seemed to be up to the task. It has 8 GB of RAM, an Intel Core 2 CPU with a 3.06 GHz clock speed and a 256 GB flash drive.

After several switches back and forth between displaying PowerPoint and the hangout, my computer froze with its fan running at full speed. I brought up the Task Manager and discovered that I had memory to spare, but CPU usage was flat at 100%. Perhaps there was some fatal interaction between Google's software and PowerPoint -- I'm not sure.

I restarted, the students joined back in, and after some time, my machine crashed again, so we gave up.

Even when it works, the user interface for switching between the hangout and PowerPoint is unnecessarily clumsy and distracting -- taking time and several mouse clicks. You should be able to select the windows you plan to use during a hangout when you set it up and switch between those windows and the hangout and chat windows with the touch of a finger or a mouse click.
The students asked afterward why President Obama's hangout went so well and ours so poorly. We speculated that Google may have provided dedicated resources for the presidential hangout and also screened the participants' connectivity and computers.

Well, that was the bad news. How about the good news? There is no doubt in my mind that Google Hangouts will become a valuable educational tool -- used both for synchronous class meetings like the one I tried and for student study and project groups.

Google Hangouts have been available for perhaps a year, Hangouts On Air for less time than that. We are using an early prototype. If you think my experience in this class hangout was bad, watch video clips of other early prototypes -- Ivan Sutherland's demonstration of Sketchpad, the first significant computer graphics program, or Doug Engelbart's demonstration of just about everything you take for granted today in your direct-manipulation, windowed user interface -- or consider the hardware and systems improvements we have witnessed since the Wright Brothers' first flight or the invention of the transistor.

Moore's law will improve the computers we use in hangouts -- my next laptop or tablet and those of my students will be a lot faster. The same will be true of data centers and servers, driven by improving technology and a need to handle ever more video traffic, which already dominates the Internet. Broadband networks will improve with technology advances and, hopefully, a bit of competition. (Will Google Fiber go nationwide?) Google may also decide to compete directly in the online education market, or they may be content to be infrastructure providers for others, or both, but I expect that Hangouts 2020 will be as common in the Internet "classroom" as chalkboards and whiteboards are in the campus classroom today.

Tuesday, May 14, 2013

President Obama selects Tom Wheeler as FCC chairman

The President has selected Tom Wheeler -- former lobbyist for both the cellular and cable industries and a major contributor to the Obama campaign -- to head the FCC, and AT&T and Comcast are both lauding the appointment.

That smacks of cronyism -- the revolving door between industry and government.

I signed a petition to name Susan Crawford the next head of the FCC, but will keep an open mind. Wheeler may have been a lobbyist for the cable and cellular industries, but he was also an invited expert for the President's Council of Advisors on Science and Technology (PCAST), which issued a report calling for the use of smart radios in sharing federal spectrum. He presumably endorses (or at least understands) the report of the PCAST spectrum group.

His October 2011 blog post Updating Spectrum Policy provides further evidence that he "gets" IP and unlicensed spectrum. Here are a couple of quotes from that post:

"Exhibit A for 21st century spectrum planning is WiFi. Operating in unlicensed spectrum, WiFi is a cacophony of competing claims for use of the spectrum. The characteristics of Internet Protocol (IP) packets allow WiFi in a Starbucks hotspot, for instance, to operate more efficiently than the licensed spectrum on the sidewalk outside."

and

"It is time to abandon the concept of perfection in spectrum allocation. The rules for 21st century spectrum allocation need to evolve from the avoidance of interference to interference tolerance. We’ve seen this evolution in the wired network; it’s now time to bring the chaotic efficiency of Internet Protocol to wireless spectrum policy."

Don't forget that fiercely anti-Communist Richard Nixon opened US relations with China. Perhaps Dark Side lobbyist Tom Wheeler will modernize wireless IP communication.

-----
Update 11/11/2016

This post has turned out to be on the mark -- Tom Wheeler acted against the wishes of his old industry friends, something of a sheep in wolf's clothing. There is speculation that Donald Trump's FCC appointees will reverse Wheeler's stance on network neutrality, and I will be pleasantly surprised if they pursue his proposal for a standard TV-interface box that combines the functions of today's set-top boxes and Internet interfaces.

Samsung says they've made a wireless breakthrough

Samsung says they are running high-frequency wireless connections at 1 Gbps and suggests that this technology could form the basis of commercial 5G networks by 2020. That sounds good, but on second thought it raises a lot of questions, like:
  • How is power consumption?
  • What about our lame data caps? Will you hit your cap in the first hour of the month?
  • What about backhaul from those gigabit towers?
  • Don't high frequency signals attenuate rapidly as one moves away from the antenna? How many base stations will have to be added?
  • Aren't high frequency signals more readily absorbed by obstructions?
  • If implemented, how well would this technology compete with WiFi, especially if Google or others were to deploy it widely?
Can you think of other "how-abouts?"
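On the data-cap question, the arithmetic is stark. The cap sizes below are illustrative, not any particular carrier's numbers:

```python
# How long does it take to exhaust a monthly data cap at a sustained 1 Gbps?
# Cap sizes are illustrative assumptions.
link_gbps = 1.0

for cap_gb in (2, 5, 250):
    seconds = cap_gb * 8 / link_gbps  # gigabytes -> gigabits, then / Gbps
    print(f"{cap_gb} GB cap: gone in {seconds:.0f} seconds")
```

Even a generous 250 GB cap would last barely half an hour at full speed, so "the first hour of the month" is, if anything, optimistic.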

Saturday, May 11, 2013

An informative article on Netflix in Bloomberg BusinessWeek

The Netflix article is a profile of the founder, Reed Hastings, and goes into some detail on their technology and strategy. Some of the points that struck me were:

  • In spite of the fact that Netflix and Amazon are direct competitors in the IP video market, Netflix is hosted on Amazon Web Services.
  • Netflix streams during the day and analyzes data at night. They load servers between 2 and 5 AM local time. Shows they predict will be popular are served from flash storage.
  • The master copies of all the shows and movies available to Netflix take up 3.14 petabytes of storage space. Netflix compresses the master files creating more than 100 different versions, each tuned for the varying bandwidth, device, and language needs of its customers. The compressed catalog is about 2.75 petabytes.
Check the article for more on the technology, Hastings and Netflix.
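Dividing the article's storage figures gives a sense of scale (the figures are theirs; the arithmetic is mine):

```python
# Average size of one compressed version, given the article's numbers:
# a 3.14 PB master catalog, compressed into ~100 versions totaling 2.75 PB.
master_pb = 3.14
compressed_total_pb = 2.75
versions = 100

avg_version_pb = compressed_total_pb / versions
ratio = master_pb / avg_version_pb
print(f"Average version: {avg_version_pb * 1000:.1f} TB "
      f"(~{ratio:.0f}x smaller than the full master catalog)")
```

Each device- and bandwidth-specific version is roughly a hundredth the size of the master catalog, which is what makes serving so many variants practical.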

North American downstream traffic

-----

Update 6/18/2013

Cogent says Verizon is not provisioning capacity to handle Netflix traffic

From a GigaOm article at http://bit.ly/11XvMzn:

Cogent and Verizon peer with each other at about ten locations, and they exchange traffic through several ports. These ports typically send and receive data at speeds of around 10 gigabits per second. When the ports start to fill up (usually at 50 percent of their capacity), the Internet companies add more ports. In this case, though, Verizon is allowing the ports that connect to Cogent to get crammed. "They are allowing the peer connections to degrade," Dave Schaeffer, chief executive officer of Cogent, said in an interview. "Today some of the ports are at 100 percent capacity."
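The provisioning rule in that quote -- roughly 10 Gbps ports, upgraded at about 50 percent utilization -- is easy to sketch. The traffic figure below is hypothetical:

```python
import math

# Smallest number of 10 Gbps ports that keeps utilization at or below
# the ~50% threshold at which peers normally add capacity.
PORT_GBPS = 10.0
UPGRADE_THRESHOLD = 0.5

def ports_needed(traffic_gbps, port_gbps=PORT_GBPS, threshold=UPGRADE_THRESHOLD):
    return math.ceil(traffic_gbps / (port_gbps * threshold))

# Hypothetical example: 42 Gbps of traffic toward one peer.
print(ports_needed(42.0), "ports")
```

At a hypothetical 42 Gbps, the rule calls for nine ports; squeezing that same traffic through five ports would run them at 84 percent utilization -- the kind of degradation Schaeffer describes.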

Thursday, May 09, 2013

Amherst and San Jose State University -- two approaches to MOOCs

There has been recent news and controversy surrounding the relationship of San Jose State University and MOOC providers Udacity and edX. I was at a conference the week before last and met Catheryn Cheal, Associate Vice President and Senior Academic Technology Officer at SJSU, who is administering the MOOC experiments at San Jose State.

She confirmed a positive result -- students taking a circuits and electronics class did better in a section that used edX material as a supplementary "text" than those in other sections. Based on that experience, they plan to offer more blended classes and will establish a Center for Excellence in Adaptive and Blended Learning.

That sounds reasonable to me. A professor had students subscribe to a MOOC in the same way as he or she would ask them to buy a book or read some articles. The results were encouraging, so the university established a center for research and training on that as a pedagogical technique.

But, SJSU has taken a second step that is less conventional. Ms. Cheal confirmed that they are now offering some classes -- elementary statistics for example -- that will substitute for on-campus classes. Students will get credit at San Jose State and the credits will be "transferable to most colleges and universities nationwide." (In a FAQ, they admonish the student to check with their school).

I am not anti-MOOC. I was at the conference I mentioned above to give a talk on the innovation in pedagogy, technology and school and social systems that I expect to come from MOOCs and modular courseware, but SJSU seems to be moving very fast.

The elementary statistics course mentioned above will be a general education course. On my campus, a faculty committee has to approve general education courses. Based on the course description, it seems to be a typical introduction to descriptive statistics and simple inference, but many majors offer such courses -- education, nursing, business, psychology, etc. Again, department and school curriculum committees traditionally specify curriculum.

I understand the desire to eliminate "bottlenecks" in student progress through the university and, as I mentioned, I expect far-reaching innovation from MOOCs and modular courseware, but I am curious to know how the decision to accept these particular courses for credit was made. (That is not to say it may not have been the correct decision).

We can contrast San Jose State with Amherst, which recently decided not to offer courses through edX, but to experiment on their own. That strikes me as a prudent middle ground. The university will learn and will become self-sufficient. For example, they might install an open source MOOC platform from Google or from Stanford/edX and let their faculty experiment by developing full courses or modules to supplement their current courses or they might decide to wait until hosted versions of those platforms become available. Amherst will learn and remain in control of its destiny.

Organizations often outsource peripheral functions in order to focus on their core mission. Outsourcing fast food service to Taco Bell might make sense, but teaching is one of the core missions of a university, and we should not rush to outsource it -- to a MOOC provider, a textbook publisher or anyone else.

-----

Update 5/12/2013

Google tells me the following universities are offering MOOCs using their open source platform, Course Builder: North Carolina State, Indiana, Cornell University, Universitat Politècnica de València, and the Indian Institute of Science, Bangalore.

They also say we can expect Course Builder to become increasingly easy for non-technical faculty to use, but will not comment on the possibility of offering it as a hosted service -- we will have to wait and see about Google MOOCs.

Update 5/13/2013

Robert McGuire provides more details on the blended circuits class mentioned above in this post.

Update 5/14/2013

San Jose State is not the first university to give credit for taking a MOOC. Six European universities gave credit to students who took Stanford's AI course online last year. Students had to go to the University of Freiburg in Germany to take a proctored final exam. San Jose State exams will be proctored using a web cam and screen capture by ProctorU.

Do you know of other cases in which college or university credit was awarded for successful completion of a MOOC?

Wednesday, May 08, 2013

Citizen journalism and television coverage of the Boston Marathon bombing

Citizen journalism on the Internet began before the invention of the Web. Today it is a significant source of news. Emily Tolan's nine minute video on the Boston Marathon bombing illustrates the interplay between citizen journalism and television.

The video is a chronological summary of events beginning with the explosions caught on cell phones and ending with Twitter and TV coverage of the capture of the second suspect.



For more on citizen journalism, check these teaching presentations.

Will Google Fiber go nationwide?

Today's New York Times has an article on yanking US broadband out of the slow lane. Might Google Fiber inspire broadband competition? Better yet, might it be broadband competition? (One also wonders why this article appeared now -- might it have been encouraged by Google PR?)

The article presents a good overview of the mediocre state of broadband connectivity in the US. It prominently features Google Fiber as a possible solution, quoting Milo Medin, who heads the Google Fiber project and was a co-founder of @Home Network, a pioneering first attempt to bring broadband to homes shortly after the passage of the Telecommunications Act of 1996 (which was designed to create competition, but failed).

Google's announcement that they would install Google Fiber in Provo, Utah, drove speculation that they were planning to go nationwide. This article does nothing to dampen that speculation.

-----

Update 5/31/2013

Speaking at the Fiber-to-the-Home Council meeting, Milo Medin, Vice President of Access Services for Google, told an audience of city planners, engineers, and mayors that Google Fiber is a business that they expect to make money from -- "a great business to be in."

Medin admitted that at first Google didn't see Google Fiber as a viable business -- it was to be a testbed for Google services. At that time, Google was lobbying for a Gigabit networking bill in Congress, but "someone on the management team" said "If we really think this is important, why whine to the government, when we can do it ourselves?"

Rather than worry about Federal or State governments and subsidies, as the phone and cable companies do, it seems that cooperation with cities is a strategic part of their plan.

The project began with a call for proposals from cities wishing to become the first gigabit testbed. Medin said "We thought a handful of cities would say they were interested ... Then we saw that 1,100 communities replied. No one at the time thought there was a real business here. But that changed when we saw the interest."

Google wants to be your ISP! Wow -- when do they come to Los Angeles?

-----

Update 6/25/2013

Seattle will have gigabit connectivity for $80 per month, with no installation fee for customers who sign a one-year contract. This sounds pretty much like Google Fiber and lends credence to Google's claim that this is a real business.

One caveat -- it is not clear which parts of the city will be covered. As of last December, they spoke of 14 neighborhoods, shown on this map:



-----

Update 7/2/2013

This article and picture gallery profiles Startup Village, home to more than 20 startups in a cluster of small houses in Kansas City. The village was established to take advantage of Google Fiber, but the community of local start-ups is even more important than 1Gbps speed.


-----

Update 7/15/2013

More innovation spurred by Google Fiber -- The KC Gigabit Education Project.


-----

Update 8/1/2013

Google to offer Starbucks WiFi (http://bit.ly/142g2Mp). Google says that most locations should see Internet speeds 10x faster than currently available. Every one of the over 7,000 locations will see this increase, and the rollout should be completed over the next 18 months. Starbucks locations in areas with Google Fiber access will use Google Fiber and its gigabit Internet speeds.

Bob Frankston (http://bit.ly/1edA9J7) has pointed out that as browsing speed rises, ad clicks rise, so Google has a hidden motive for gigabit speed.

-----

Update 8/5/2013

Japan and Korea lead in fiber penetration -- US 14th

The Organisation for Economic Co-operation and Development (OECD) reports that Japan and Korea lead the world in fiber broadband penetration. The US is 14th, trailing Turkey. I've given up on ever seeing FiOS in my neighborhood -- let's root for Google Fiber.

More statistics at the OECD Broadband Portal.

-----

Update 8/15/2013

DSL Reports has seen an internal memo sent to Comcast employees, which says they will revise their bundle offerings and pricing in Provo, Utah in response to Google Fiber.  Google is offering aggressive competition, and, if the leaked report is accurate, Comcast will still be much slower than Google.

-----
Update 10/26/2014

Google has a 180-day license to experiment with millimeter wireless transmission. The high-frequency transmission would cover only short distances, but, if they are thinking of taking Google Fiber nationwide, they may be looking for a technology to cover the last few hundred yards from an access point on a street to the houses on the block. Google Fiber (or municipal fiber, as in Stockholm) would provide high-speed backhaul.

-----
Update 10/27/2014

Google is evaluating 34 cities for the possibility of installing Google Fiber.


-----
Update 3/2/2015

Milo Medin, VP of Access Services at Google Fiber, spoke at the Comtel Summit last week about problems they have had dealing with city bureaucracy.

Medin mentioned byzantine permission processes, inaccurate information about infrastructure, and the reluctance of owners of multi-unit buildings to cooperate as hurting some cities' chances of attracting Google Fiber.

His remarks must have left folks from the incumbent ISPs smiling and mumbling "we told you so." They also make me curious as to the nature of the deals Google makes with the cities. Does Google expect some sort of advantage over the incumbents? Do they prohibit municipal ownership of infrastructure in the future?

Google is offering a terrific deal in Fiber cities today, but what will happen in, say, ten years if Google advertising revenue has flattened and the company has a lot of employees and overhead? Will they become just another oligopolistic ISP?

-----
Update 3/4/2015

Under Title II, Google can now access telephone poles, simplifying the installation of Google Fiber, but what fees do they have to pay for that access and what sort of red-tape permitting may they face?

When the 1996 Telecommunications Act ordered incumbent telephone companies to grant competitors access to their lines, the incumbents stifled those efforts. Could something similar happen with respect to phone pole access? (That is not a rhetorical question -- I don't know).

Monday, May 06, 2013

The Internet -- super fast, but simple; the brain, slower, but complex

Round trip packet time
Which is greater -- the round trip travel time for a data packet sent over the Internet from my home in Los Angeles, California to La Universidad de Magallanes (UM) in Punta Arenas, Chile, or the time it takes me to see, recognize and pick up a pen?

This sounds suspiciously like one of those counter-intuitive questions a professor might ask to get a class's attention, so let me end the suspense. It turns out that both take around the same time, 1/4 second. Let’s see how I reached that conclusion, starting with seeing and picking up a pen.

For that time estimate, we turn to brain science. Luckily, my favorite science podcast, Radiolab, recently broadcast a program on speed, which included a two-minute segment on the communication steps involved in seeing and picking up a pen. Stop and listen to the excerpt -- it enumerates the time spent traversing communication links between your eye and brain, between different regions within your brain, and finally to the top of your brain where the commands to pick up the pen are issued. The entire sequence takes about 1/4 second.

Seeing a pen and picking it up
That is not a great surprise -- picking up a pen seems really fast and automatic. But, what about the round trip time for a packet routed from my home in West Los Angeles to UM? I measured that time using Ping, a simple utility program that sends a packet to a remote host and measures the time it takes for it to get there and an acknowledgement to come back.

As shown here, Ping sent four packets to Punta Arenas. The fastest round trip was 233 milliseconds, the slowest 282 and the average of the four was 259 milliseconds -- just over 1/4 second.
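
Ping's summary statistics are easy to reproduce. In this sketch the fastest and slowest round-trip times come from my measurement; the two middle values are hypothetical, chosen only so that the four packets average to the reported 259 milliseconds:

```python
# Round-trip times in milliseconds for four ping packets.
# 233 (fastest) and 282 (slowest) were measured; the two
# middle values are hypothetical, picked to reproduce the
# reported 259 ms average.
rtts_ms = [233, 255, 266, 282]

fastest = min(rtts_ms)
slowest = max(rtts_ms)
average = sum(rtts_ms) / len(rtts_ms)

print(fastest, slowest, average)  # 233 282 259.0
```
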
I checked the question-answering engine Wolfram Alpha and learned that the great-circle distance from Los Angeles to Punta Arenas is 6,644 miles. That says our average data packet traveled around 51,000 miles per second -- 28 percent of the speed of light!
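
The back-of-the-envelope arithmetic works out as follows (the packets actually travel farther than the great-circle distance, so this understates the true signal speed):

```python
# Effective end-to-end speed of the ping packets.
one_way_miles = 6644          # LA to Punta Arenas, great circle
avg_rtt_sec = 0.259           # average ping round-trip time

round_trip_miles = 2 * one_way_miles
speed_mi_per_sec = round_trip_miles / avg_rtt_sec

light_mi_per_sec = 186282     # speed of light in a vacuum
fraction_of_c = speed_mi_per_sec / light_mi_per_sec

print(round(speed_mi_per_sec), round(fraction_of_c, 2))  # 51305 0.28
```
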

Listening to the Radiolab clip, I counted ten communication hops in picking up the pen -- the first from the eye to the middle of the brain, the second from the middle to the back and so forth. Those hops are between clusters of neurons and the information travels along axons.

In the Internet, data packets hop from one router to the next. Routers are special-purpose computers that are programmed to forward packets from one network to another and they communicate either wirelessly or over some sort of cable. In the case of my test, the first hop was wireless -- from my laptop to my home WiFi access point. The second hop was from my home network to my Internet service provider's network over a copper cable. From there the links were all over fiber optic cables, some buried underground, others under the sea, until they reached the destination computer at UM. The acknowledgement packets traversed a similar path in the opposite direction. Using a program called Traceroute, I counted hops through 18 routers between my house and UM.
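
Traceroute prints one numbered line per router, so counting hops amounts to counting those lines. A minimal sketch over a hypothetical four-hop excerpt (the real trace to UM showed 18):

```python
import re

# Hypothetical traceroute-style output; a real trace prints
# one numbered line per router hop along the path.
sample = """\
 1  192.168.1.1         2.1 ms
 2  10.0.0.1           11.4 ms
 3  core1.example.net  24.9 ms
 4  border.example.cl  229.7 ms
"""

# A hop line begins with its hop number.
hop_lines = [ln for ln in sample.splitlines()
             if re.match(r"\s*\d+\s", ln)]

print(len(hop_lines))  # 4
```
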

Since the distances inside the brain are small, it seems that complex, time-consuming processing is taking place in the neurons at each location and the signals between them travel relatively slowly. Contrast that with the Internet, where the communication links are fast, but the processing by each router is simple. It takes a router very little time to forward a packet to the next router along the path to Punta Arenas.
I don't know about you, but I find the speed of the Internet mind-blowing but understandable and the complex processing needed to pick up that pen awe-inspiring. One day, brain science will reduce both to just being mind-blowing.

Saturday, May 04, 2013

New wine for new bottles -- innovation from modular teaching material and MOOCs

I received a nice award and gave a talk on "Innovation from modular teaching material and MOOCs" at the Sloan Consortium 6th Annual International Symposium on Emerging Technology for Online Learning (#et4online).

The slides from the talk are now online here.

Each slide has text notes so they should stand on their own.  The presentation also contains quite a few links as well as a recording of Wolfman Jack howling.


If you wish, you can watch a video of the presentation, but that would probably take longer than reading through the PowerPoints.

Friday, May 03, 2013

Six features I want in an educational video player

The London Olympics were streamed by NBC in the US and the BBC in England. I watched both and blogged about the experience. After it was over, I gave my "gold medal" to the BBC -- they did a better job. One of the things I liked about the BBC coverage was their video player, which was more interactive than NBC's -- it was designed for the Internet, not adapted from television.

Video players for teaching should also be interactive -- we need more than a timeline with VCR pause and play buttons. Let me suggest a few things I would like to see in a video player for my students.

1. Clickable chapter headings -- I currently make teaching videos using Camtasia, which lets me create clickable chapter headings on the left side of the screen, as shown here:



The BBC Olympic player also had clickable chapter headings, and they are on my feature list.

2. Variable playback speed -- When I listen to podcasts, I speed up the playback. As shown here, Coursera's video player allows the student to do the same:



I applaud their innovation, but it has to be combined with clickable chapter headings. When the student changes the speed, the chapter entry points must be recalculated.

In addition to adjusting the chapter start points, the player should be instrumented to see what speeds students use and when they change speeds. This data could be correlated with comprehension and retention as well as the underlying concepts being covered when students speed or slow playback. (My hypothesis is that retention and comprehension would not be diminished by, say, a 15% speed increase).
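
One way to implement the recalculation: store chapter marks in media seconds and map them to wall-clock playback seconds whenever the speed changes. A minimal sketch, with a hypothetical helper name:

```python
def rescale_chapters(chapter_starts_sec, speed):
    """Map chapter start times (media seconds) to the wall-clock
    seconds at which they are reached at the given playback speed."""
    return [t / speed for t in chapter_starts_sec]

# At 1.25x, a chapter that starts 2 minutes into the video
# is reached 24 seconds sooner.
print(rescale_chapters([60.0, 120.0], 1.25))  # [48.0, 96.0]
```
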

3. A quick-replay button -- NBC made a simple but handy addition to their video controls: a button that backed up 15 seconds every time it was clicked. Click it once and the video backed up 15 seconds, click it twice and it backed up 30 seconds, and so on. That would be great if a student missed a word or point and wanted to hear it again.



4. Programmed pause points with click to continue -- This is very simple. As a teacher, I want to be able to insert breakpoints, causing the video to pause while the student thinks about or does something. He or she would simply click a resume button when ready to continue.

5. "Tell me more" button -- The BBC Olympic player allowed the user to display ancillary information -- record times, athletes' statistics, current standings, etc. -- as shown here:



A teacher would use "Tell me more" to add context to or paraphrase the point being made in the current chapter, simulating what happens in a classroom when a student is confused and asks for clarification. The teacher does not simply repeat what they had originally said, but uses different words, examples, images, notation, etc.

The student would learn to click this button when he or she found the current chapter confusing.

6. Study-group support -- Michael Wesch has pointed out that the architecture of our classrooms and lecture halls discourages student interaction. They face forward, looking to the teacher for information. I bet small children relate to each other in school, but I find my university students reluctant to work with or talk to each other during class.



I want a player that lets groups of students study the same video together at the same time. Sharing a video during a Google Hangout would be a step in the right direction, but control of the video would have to be distributed among the students. It would also require a search to discover study partners who were online, on the same lesson, at the same time. Study-group support would require more thought and HCI design than my other suggestions, but it would be a terrific addition to the video player I would like for my students.

Well, those are my six suggestions. What would you like to see in a video player for education?

Wednesday, May 01, 2013

Hear a recording Alexander Graham Bell made in 1885

The National Museum of American History has released a totally cool reconstruction of an 1885 recording of Alexander Graham Bell reading his handwritten notes.

This video shows the notes scrolling slowly as Bell reads. (Be patient -- the very first part is hard to understand).



The noninvasive optical technique used to scan and recover the recording was invented at Lawrence Berkeley National Laboratory in 2002 with assistance from the Library of Congress. They have used it to reconstruct even older recordings than this one.

I love this sort of thing because it illustrates how we are able to start with very crude proof-of-concept prototypes and refine them over time. There are tons of examples in and out of IT, for example, the Wright brothers' first plane, early automobiles and Doug Engelbart demonstrating WYSIWYG word processing, windows, the mouse and many other things at The Demo in 1968.

Doug Engelbart, The Demo