In the world of the Internet of Broken Things, there is nothing more impressive to me than the fact that these things actually sell as well as they do. The risks associated with internet-connected devices seem insurmountable, save for the fact that we are all cattle being marched along to the slaughterhouse, our faces as serene as could be. Whether companies are simply deciding that supporting these products isn't worth it any longer and reducing functionality, firing off firmware updates that kill off selling-point features, or leaving security holes wide enough to drive a malicious creepster through, it seems that very little thought goes into the fact that customers are, you know, buying these things. Once that purchase is made, how long it remains functional and secure appears to be an afterthought.
But the risks apparently don't end there. Let's say you bought an IoBT device. Let's say you enjoyed using it for months, or years. And then let's say that the company you bought it from suddenly got sued for patent infringement, settled with the plaintiff, and part of that settlement is, oops, your shit doesn't work any longer? Well, in that case, you're an owner of a Flywheel home exercise bike, whose maker settled a patent infringement suit with nevermind-you-already-know-who.
Every morning at 4:30AM, Shani Maxwell would throw on her Flywheel T-shirt and hop on her Fly Anywhere bike. An avid fan who’s been riding with Flywheel since 2013, she’d leapt at the chance to own the company’s branded bike when the company released its Peloton competitor in 2017.
So it came as a surprise when she received an email from Peloton, not Flywheel, informing her that her $1,999 bike would no longer function by the end of next month. Flywheel settled a patent dispute with Peloton earlier in February and decided after the lawsuit to discontinue the at-home bike product.
“It shocked me,” Maxwell says. “We knew the lawsuit was in progress and we heard the settlement had been reached — we just didn’t realize they would shut down.”
In fact, I'm sure Maxwell wasn't even aware it was a possibility that the product she bought would one day just stop working, due to some intellectual property dispute to which she wasn't a party. To be clear, there wasn't any real choice given in any of this, either. The settlement included having Peloton reach out and offer to replace the Flywheel bike with a refurbished Peloton. If the customer didn't want the used Peloton bike, well, they could fuck right off with no recompense.
It's important to keep in mind at this point that people paid for these bikes and the service they came with. Paid very real money for a product that, poof, disappeared one day. Most Flywheel customers apparently took the deal with Peloton. After all, the other option sucks out loud. Some of them were quite mad about it.
But most? Well, serene-faced cattle marched towards the slaughterhouse.
For Podnos, the Flywheel experience was just another lesson in taking a chance with the Internet of Things. “It’s the risk you take when signing up for a platform that is still in development. It was a risk factor that we weighed from the onset, and were comfortable with,” he said. “I don’t think it will dissuade me from trying new IoT services, but it’s certainly a cautionary tale that consumers should be aware of.”
This is why we can't have nice things. Or things at all, it seems.
There’s widespread concern that video cameras will use facial recognition software to track our every public move. Far less remarked upon — but every bit as alarming — is the exponential expansion of “smart” video surveillance networks.
Private businesses and homes are starting to plug their cameras into police networks, and rapid advances in artificial intelligence are investing closed-circuit television, or CCTV, networks with the power for total public surveillance. In the not-so-distant future, police forces, stores, and city administrators hope to film your every move — and interpret it using video analytics.
The rise of all-seeing smart camera networks is an alarming development that threatens civil rights and liberties throughout the world. Law enforcement agencies have a long history of using surveillance against marginalized communities, and studies show surveillance chills freedom of expression — ill effects that could spread as camera networks grow larger and more sophisticated.
To understand the situation we’re facing, we have to understand the rise of the video surveillance industrial complex — its history, its power players, and its future trajectory. It begins with the proliferation of cameras for police and security, and ends with a powerful new industry imperative: complete visual surveillance of public space.
Video Management Systems and Plug-in Surveillance Networks
In their first decades of existence, CCTV cameras were low-resolution analog devices that recorded onto tapes. Businesses or city authorities deployed them to film a small area of interest. Few cameras were placed in public, and the power to track people was limited: If police wanted to pursue a person of interest, they had to spend hours collecting footage by foot from nearby locations.
In the late 1990s, video surveillance became more advanced. A company called Axis Communications invented the first internet-enabled surveillance camera, which converted moving images to digital data. New businesses like Milestone Systems built Video Management Systems, or VMS, to organize video information into databases. VMS providers created new features like motion sensor technology that alerted guards when a person was caught on camera in a restricted area.
As time marched on, video surveillance spread. By one account, about 50 years ago, the United Kingdom had somewhere north of 60 permanent CCTV cameras installed nationwide. Today, the U.K. has over 6 million such devices, while the U.S. has tens of millions. According to marketing firm IHS Markit, 1 billion cameras will be watching the world by the end of 2021, with the United States rivaling China’s per person camera penetration rate. Police can now track people across multiple cameras from a command-and-control center, desktop, or smartphone.
While it is possible to link thousands of cameras in a VMS, it is also expensive. To increase the amount of CCTVs available, cities recently came up with a clever hack: encouraging businesses and residents to place privately owned cameras on their police network — what I call “plug-in surveillance networks.”
[Photo: Video from surveillance cameras around the city is displayed at the Real-Time Crime Center, the viewing space for Project Green Light, at the police department headquarters in Detroit on June 14, 2019. Credit: Brittany Greeson/The New York Times via Redux]
By pooling city-owned cameras with privately owned cameras, policing experts say an agency in a typical large city may amass hundreds of thousands of video feeds in just a few years.
Detroit has popularized plug-in surveillance networks through its controversial Project Green Light program. With Project Green Light, businesses can purchase CCTV cameras and connect them to police headquarters. They can also place a bright green light next to the cameras to indicate they are part of the police network. The project claims to deter crime by signaling to residents: The police are watching you.
Detroit is not alone. Chicago, New Orleans, New York, and Atlanta have also deployed plug-in surveillance networks. In these cities, private businesses and/or homes provide feeds that are integrated into crime centers so that police can access live streams and recorded footage. The police department in New Haven, Connecticut, told me they are looking into plug-in surveillance, and others are likely considering it.
The number of cameras on police networks now ranges from tens of thousands (Chicago) to several hundred (New Orleans). With so many cameras in place, and only a small team of officers to watch them, law enforcement agencies face a new challenge: How do you make sense of all that footage?
The answer is video analytics.
Video Analytics Takes Off
Around 2006, a young Israeli woman was recording family videos every weekend, but as a student and parent, she didn’t have time to watch them. A computer scientist at her university, Professor Shmuel Peleg, told me he tried to create a solution for her: He would take a long video and condense the interesting activity into a short video clip.
His solution failed: It only worked on stationary cameras, and the student’s video camera was moving when she filmed her family.
Peleg soon found another use case in the surveillance industry, which relies on stationary cameras. His solution became BriefCam, a video analytics firm that can summarize video footage from a scene across time so that investigators can view all relevant footage in a short space of time.
Using a feature called Video Synopsis, BriefCam overlays footage of events happening at different times as if they are appearing simultaneously. For example, if several people walked past a camera at 12:30 p.m., 12:40 p.m., and 12:50 p.m., BriefCam will aggregate their images into a single scene. Investigators can view all footage of interest from a given day in minutes instead of hours.
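The core idea of a video synopsis can be sketched in a few lines of code. This is purely a toy illustration of the concept described above, not BriefCam's actual algorithm; every name and number here is invented for illustration.

```python
# Toy sketch of the "synopsis" idea: events captured at different times
# are re-based onto a common playback timeline so they can be reviewed
# together, preserving each event's duration.
from dataclasses import dataclass

@dataclass
class Event:
    label: str       # e.g. "person", "car" (illustrative labels)
    start_s: float   # when the event began, in seconds from midnight
    duration_s: float

def synopsize(events, gap_s=1.0):
    """Re-base each event to play back-to-back, preserving duration."""
    condensed = []
    cursor = 0.0
    for ev in sorted(events, key=lambda e: e.start_s):
        condensed.append(Event(ev.label, cursor, ev.duration_s))
        cursor += ev.duration_s + gap_s
    return condensed

# Three pedestrians filmed at roughly 12:30, 12:40, and 12:50 p.m. ...
events = [Event("person", 45000, 8),
          Event("person", 45600, 5),
          Event("person", 46200, 6)]
summary = synopsize(events)
# ... condense into a clip that plays in seconds rather than 20 minutes.
total = summary[-1].start_s + summary[-1].duration_s
```

Real systems composite the pixels of each event into a shared background frame; the timeline re-basing above is the scheduling half of that trick.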
Thanks to rapid advances in artificial intelligence, summarization is just one feature in BriefCam’s product line and the rapidly expanding video analytics industry.
Behavior recognition includes video analytics capabilities like fight detection, emotion recognition, fall detection, and the detection of loitering, dog walking, jaywalking, toll fare evasion, and even lying.
Object recognition can recognize faces, animals, cars, weapons, fires, and other things, as well as human characteristics like gender, age, and hair color.
Anomalous or unusual behavior detection works by recording a fixed area for a period of time — say, 30 days — and determining “normal” behavior for that scene. If the camera sees something unusual — say, a person running down a street at 3:00 a.m. — it will flag the incident for attention.
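The baseline-then-flag logic just described can be sketched with a simple statistical rule: learn what "normal" activity looks like for each hour from historical counts, then flag observations far outside it. This is a minimal illustration only; real video-analytics systems model far richer features than event counts, and the data here is invented.

```python
# Minimal sketch of unusual-behavior detection: compare an observed
# event count against the mean and standard deviation learned for that
# hour over a training window (e.g. 30 days).
import statistics

def build_baseline(history):
    """history maps hour -> list of event counts seen at that hour."""
    return {h: (statistics.mean(c), statistics.pstdev(c))
            for h, c in history.items()}

def is_anomalous(baseline, hour, count, z_threshold=3.0):
    """Flag counts more than z_threshold standard deviations from normal."""
    mean, stdev = baseline[hour]
    if stdev == 0:
        return count != mean
    return abs(count - mean) / stdev > z_threshold

# Invented counts of "people seen running": quiet at 3 a.m., busy at 8 a.m.
history = {3: [0, 0, 1, 0, 0, 1, 0, 0, 0, 1],
           8: [40, 42, 38, 41, 39, 40, 43, 37, 41, 39]}
baseline = build_baseline(history)
flag_3am = is_anomalous(baseline, 3, 6)   # a burst of runners at 3 a.m. stands out
flag_8am = is_anomalous(baseline, 8, 41)  # ordinary morning traffic does not
```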
Video analytics systems can analyze and search across real-time streams or recorded footage. They can also isolate individuals or objects as they traverse a smart camera network.
Chicago; New Orleans; Detroit; Springfield, Massachusetts; and Hartford, Connecticut, are some of the cities currently using BriefCam for policing.
To Search and Surveil
With city spaces blanketed in cameras, and video analytics to make sense of them, law enforcement agencies gain the capacity to record and analyze everything, all the time. This provides authorities the power to index and search a vast database of objects, behaviors, and anomalous activity.
In Connecticut, police have used video analytics to identify or monitor known or suspected drug dealers. Sergeant Johnmichael O’Hare, former Director of the Hartford Real-Time Crime Center, recently demonstrated how BriefCam helped Hartford police reveal “where people go the most” in the space of 24 hours by viewing footage condensed and summarized in just nine minutes. Using a feature called “pathways,” he discovered hundreds of people visiting just two houses on the street and secured a search warrant to verify that they were drug houses.
Video analytics startup Voxel51 is also adding more sophisticated searching to the mix. Co-founded by Jason Corso, a professor of electrical engineering and computer science at the University of Michigan, the company offers a platform for video processing and understanding.
Corso told me his company hopes to offer the first system where people can “search based on semantic content about their data, such as, ‘I want to find all the video clips that have more than 3-way intersections … with at least 20 vehicles during daylight.’” Voxel51 “tries to make that possible” by taking video footage and “turning it into structured searchable data across different types of platforms.”
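Once footage has been turned into structured records, the query Corso describes reduces to ordinary filtering over extracted metadata. The sketch below illustrates that idea only; the schema and field names are invented, not Voxel51's actual data model.

```python
# Toy sketch of semantic search over video metadata: each clip carries
# attributes extracted by upstream analytics, and a query filters on them.
clips = [
    {"id": "cam12_0800", "intersections": 4, "vehicles": 26, "daylight": True},
    {"id": "cam12_0300", "intersections": 4, "vehicles": 3,  "daylight": False},
    {"id": "cam07_1400", "intersections": 2, "vehicles": 31, "daylight": True},
]

def search(clips, min_intersections=4, min_vehicles=20, daylight=True):
    """Return clips whose extracted metadata matches every criterion,
    e.g. 'more than 3-way intersections, at least 20 vehicles, daylight'."""
    return [c for c in clips
            if c["intersections"] >= min_intersections
            and c["vehicles"] >= min_vehicles
            and c["daylight"] == daylight]

matches = search(clips)  # only the first clip satisfies all three criteria
```

The hard part, of course, is producing those structured attributes from raw pixels in the first place; the search itself is the easy layer on top.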
Unlike BriefCam, which analyzes video using nothing but its own software, Voxel51 offers an open platform which allows third parties to add their own analytics models. If the platform succeeds, it will supercharge the ability to search and surveil public spaces.
Corso told me his company is working on a pilot project with the Baltimore police for their CitiWatch surveillance program and plans to trial the software with the Houston Police Department.
As cities start deploying a wide range of monitoring devices from the so-called internet of things, researchers are also developing a technique known as video analytics and sensor fusion, or VA/SF, for police intelligence. With VA/SF, multiple streams from sensors are combined with video analytics to reduce uncertainties and make inferences about complex situations. As one example, Peleg told me BriefCam is developing in-camera audio analytics that uses microphones to discern actions that may confuse AI systems, such as whether people are fighting or dancing.
VMSs also offer smart integration across technologies. Former New Haven Chief of Police Anthony Campbell told me how ShotSpotter sensors, controversial devices that listen for gunshots, integrate with specialized software so that when a gun is fired, nearby swivel cameras instantly turn toward the location of the weapons discharge.
Video analytics captures a wide variety of data about the areas covered by smart camera networks. Not surprisingly, the information captured is now being proposed for predictive policing: the use of data to predict and police crime before it happens.
In 2002, the dystopian film “Minority Report” depicted a society using “pre-crime” analytics for police to intervene in lawbreaking before it occurs. In the end, the officers in charge tried to manipulate the system for their own interests.
A real-world version of “Minority Report” is emerging through real-time crime centers used to analyze crime patterns for police. In these centers, law enforcement agencies ingest information from sources like social media networks, data brokers, public databases, criminal records, and ShotSpotters. Weather data is even included for its impact on crime (because “bad guys don’t like to get wet”).
In a 2018 document, the data storage firm Western Digital and the consultancy Accenture predicted mass smart camera networks would be deployed “across three tiers of maturity.” This multi-stage adoption, they contended, would “allow society” to gradually abandon “concerns about privacy” and instead “accept and advocate” for mass police and government surveillance in the interest of “public safety.”
Tier 1 encompasses the present, where police use CCTV networks to investigate crimes after the fact.
By 2025, society will reach Tier 2 as municipalities transform into “smart” cities, the document said. Businesses and public institutions, like schools and hospitals, will plug camera feeds into government and law enforcement agencies to inform centralized, AI-enabled analytics systems.
Tier 3, the most predictive-oriented surveillance system, will arrive by 2035. Some residents will voluntarily donate their camera feeds, while others will be “encouraged to do so by tax-break incentives or nominal compensation.” A “public safety ecosystem” will centralize data “pulled from disparate databases such as social media, driver’s licenses, police databases, and dark data.” An AI-enabled analytics unit will let police assess “anomalies in real time and interrupt a crime before it is committed.”
That is to say, to catch pre-crime.
Rise of the Video Surveillance Industrial Complex
While CCTV surveillance began as a simple tool for criminal justice, it has grown into a multibillion-dollar industry that covers multiple industry verticals. From policing and smart cities to schools, health care facilities, and retail, society is moving toward near-complete visual surveillance of commercial and urban spaces.
Denmark-based Milestone Systems, a top VMS provider with half its revenues in the U.S., had fewer than 10 employees in 1999. Today it is a major corporation with offices in over 20 countries.
Axis Communications used to be a network printer outfit. It has since become a leading camera provider with over $1 billion in annual sales.
BriefCam began as a university project. Now it is among the world’s top video analytics providers, with clients, it says, spanning over 40 countries.
Over the past six years, Canon purchased all three, giving the imaging conglomerate ownership of industry giants in video management software, CCTV cameras, and video analytics. Motorola recently acquired a top VMS provider, Avigilon, for $1 billion. In turn, Avigilon and other large firms have made acquisitions of their own.
Familiar big tech giants are also in on the action. Lieutenant Patrick O’Donnell of the Chicago police force told me his department is working on a non-disclosure agreement with Google for a video analytics pilot project to detect people reacting to gunfire, or lying prone, so that police can receive real-time alerts. (Google did not respond to a request for comment.)
Video monitoring networks inevitably entangle and implicate a whole ecosystem of vendors, some of whom have offered, or may yet offer, services specifically targeted at such systems. Microsoft, Amazon, IBM, Comcast, Verizon, and Cisco are among those enabling the networks with technologies like cloud services, broadband connectivity, or video surveillance software.
In the public sector, the National Institute of Standards and Technology is funding “public analytics” and communications networks like the First Responder Network Authority, or FirstNet, for real-time video and other surveillance technologies. FirstNet will cost $46.5 billion, and is being built by AT&T.
Voxel51 is another NIST-backed venture. The public is thus paying for their own high-tech surveillance three times over: first, through taxes for university research; second, through grant money for the formation of a for-profit startup (Voxel51); and third, through the purchase of Voxel51’s services by city police departments using public funds.
With the private and public sector looking to expand the presence of cameras, video surveillance has become a new cash cow. As Corso put it, “there will be something like 45 billion cameras in the world within a few decades. That’s a lot of (video) pixels. For the most part, most of those pixels go unused.” Corso’s estimate mirrors a 2017 forecast from New York venture capital firm LDV, which believes smartphones will evolve to have even more cameras than they do today, contributing to the growth.
Companies that began with markets for police and security are now diversifying their offerings to the commercial sector. BriefCam, Milestone, and Axis advertise the use of video analytics for retailers, where they can monitor foot traffic, queue length, shopping patterns, floor layouts, and conduct A/B testing. Voxel51 has an option built for the fashion industry and plans to expand across industry verticals. Motionloft offers analytics for smart cities, retailers, commercial real estate, and entertainment venues. Other examples abound.
Public and private sector actors are pressing for a world full of smart video surveillance. Peleg, for example, told me of a use case for smart cities: If you drive into the city, you could “just park and go home” without using a parking meter. The city would send a bill to your house at the end of the month. “Of course, you lose your privacy,” he added. “The question is, do you really care about Big Brother knows where you are, what you do, etc.? Some people may not like it.”
How to Rein in Smart Surveillance
Those who do not like new forms of Big Brother surveillance are presently fixated on facial recognition. Yet they have largely ignored the shift to smart camera networks — and the industrial complex driving it.
Thousands of cameras are now set to scrutinize our every move, informing city authorities whether we are walking, running, riding a bike, or doing anything “suspicious.” With video analytics, artificial intelligence is used to identify our sex, age, and type of clothes, and could potentially be used to categorize us by race or religious attire.
Such surveillance could have a severe chilling effect on our freedom of expression and association. Is this the world we want to live in?
The capacity to track individuals across smart CCTV networks can be used to target marginalized communities. The detection of “loitering” or “shoplifting” by cameras concentrated in poor neighborhoods may deepen racial bias in policing practices.
This kind of racial discrimination is already happening in South Africa, where “unusual behavior detection” has been deployed by smart camera networks for several years.
In the United States, smart camera networks are just emerging, and there is little information or transparency about their use. Nevertheless, we know surveillance has been used throughout history to target oppressed groups. In recent years, the New York Police Department secretly spied on Muslims, the FBI used surveillance aircraft to monitor Black Lives Matter protesters, and the U.S. Customs and Border Protection began building a high-tech video surveillance “smart border” across the Tohono O’odham reservation in Arizona.
Law enforcement agencies claim smart camera networks will reduce crime, but at what cost? If a camera could be put in every room in every house, domestic violence might go down. We could add automated “filters” that only record when a loud noise is detected, or when someone grabs a knife. Should police put smart cameras inside every living room?
The commercial sector is likewise rationalizing the advance of surveillance capitalism into the physical domain. Retailers, employers, and investors want to put us all under smart video surveillance so they can manage us with visual “intelligence.”
When asked about privacy, several major police departments told me they have the right to see and record everything you do as soon as you leave your home. Retailers, in turn, won’t even approach public disclosure: They are keeping their video analytics practices secret.
In the United States, there is generally no “reasonable expectation” of privacy in public. The Fourth Amendment encompasses the home and a few public areas we “reasonably” expect to be private, such as a phone booth. Almost everything else — our streets, our stores, our schools — is fair game.
Even if rules are updated to restrict the use of video surveillance, we cannot guarantee those rules will remain in place. With thousands of high-res cameras networked together, a dystopian surveillance state is a mouse click away. By installing cameras everywhere, we are opening a Pandora’s box.
To address the privacy threats of smart camera networks, legislators should ban plug-in surveillance networks and restrict the scope of networked CCTVs beyond the premises of a single site. They should also limit the density of camera and sensor coverage in public. These measures would block the capacity to track people across wide areas and prevent the phenomenon of constantly being watched.
The government should also ban video surveillance analytics in publicly accessible spaces, perhaps with exceptions for rare cases such as the detection of bodies on train tracks. Such a ban would disincentivize mass camera deployments because video analytics is needed to analyze large volumes of footage. Courts should urgently reconsider the scope of the Fourth Amendment and expand our right to privacy in public.
Police departments, vendors, and researchers need to disclose and publicize their projects, and engage with academics, journalists, and civil society.
It is clear we have a crisis in the works. We need to move beyond the limited conversation of facial recognition and address the broader world of video surveillance, before it is too late.
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.
The USMCA requires participating countries to have a copyright term of at least the life of the author plus 70 years. In practice, this measure will affect only Canada: in the United States, copyright already lasts for the author’s life plus 70 years, and in Mexico it’s even longer (life plus 100 years). Only Canada has stuck with the life-plus-50-years minimum required by the Berne Convention.
It’s a common story: again and again, trade agreements bring longer copyright terms to participating countries under the banner of standardization, often with the United States as the benchmark. But that “standardization” only takes place in one direction, toward more restrictive copyright laws. The failed Trans-Pacific Partnership Agreement (TPP) would have lengthened copyright terms for several participating countries. It also would have brought US copyright’s protection for digital locks to participating countries.
The USMCA is just the latest example: when copyright terms are negotiated in private, multinational agreements, it tends to favor the interests of large media companies. Countries should decide their own copyright laws by inclusive, democratic processes, not through secret negotiations.
While Diego’s story has become a rallying cry for advocacy for open access to research, it’s important for another reason too. It shows the dangerous consequences of copyright-expanding trade agreements. The law Diego was tried under had a sentencing requirement that lawmakers passed in order to comply with a trade agreement with the U.S.
Trade agreements that expand copyright almost never carry requirements that participating nations honor limitations on copyright like fair use or fair dealing, leaving many countries with strong protection for large rights-holders and weak protection for their citizens’ rights.
Copyright should not be a global monolith. Differences between countries’ copyright laws are a feature, not a bug. In implementing copyright law, lawmakers should carefully balance the rights of copyright holders with the rights of the public to use and build upon copyrighted works. Lawmakers can’t make that balance when their trade negotiators have already given the public’s rights away.
Today, we are told that the bigness of Big Tech giants was inevitable: the result of "network effects." For example, once everyone you want to talk to is on Facebook, you can't be convinced to use another, superior service, because all the people you'd use that service to talk to are still on Facebook. And of course, those people also can't leave Facebook, because you're still there.
But network effects were once a double-edged sword, one that could be wielded both by yesterday's Goliaths and today's Davids. Once, network effects made companies vulnerable, just as much as they protected them.
The early, pre-graphic days of the Internet were dominated by Usenet, a decentralized, topic-based discussion-board system that ran on UUCP -- AT&T's Unix-to-Unix Copy utility -- that allowed administrators of corporate servers to arrange for their computers to dial into other organizations' computers and exchange stored messages with them, and to pass on messages that were destined for more distant systems. Though UUCP was originally designed for person-to-person messaging and limited file transfers, the administrators of the world's largest computer systems wanted a more freewheeling, sociable system, and so Usenet was born.
Usenet systems dialed each other up to exchange messages, using slow modems and commercial phone lines. Even with the clever distribution system built into Usenet (which allowed for one node to receive long-distance messages for its closest neighbors and then pass the messages on at local calling rates), and even with careful call scheduling to chase the lowest long-distance rates in the dead of night, Usenet was still responsible for racking up some prodigious phone bills for the corporations who were (mostly unwittingly) hosting it.
The very largest Usenet nodes were hosted by companies so big that their Usenet-related long distance charges were lost in the dictionary-sized bills the company generated every month (some key nodes were operated by network administrators who worked for phone companies where long-distance calls were free).
The administrators of these key nodes semi-jokingly called themselves "the backbone cabal" and they saw themselves as having a kind of civic duty to Usenet, part of which was ensuring that their bosses never got wind of it and (especially) that Usenet never created the kind of scandal that would lead to public outcry that would threaten the project.
Which is why the backbone cabal was adamant that certain discussion forums be suppressed. Thanks to a convention proposed by EFF co-founder John Gilmore, there was a formal process for creating a Usenet newsgroup, requiring that a certain number of positive votes be cast for the group's creation by Usenet's users, and that this positive force not be checked by too many negative votes. Though this compromise stacked the deck against controversy by allowing a critical mass of objectors to block even very popular proposals, some proposed controversial newsgroups made it through the vote.
When that happened, the backbone cabal's response was to "protect Usenet from its own users," by refusing to carry these controversial newsgroups on their long-haul lines, meaning that all the local systems (who depended on the backbone to serve up UUCP feeds without long-distance fees) would not be able to see them. It was a kind of network administrator's veto.
Usenet users chafed at the veto. Some of the "controversial" subjects the cabal blocked (like recreational drugs) were perfectly legitimate subjects of inquiry; in other cases (rec.gourmand -- a proposal for a group about cooking inside the "recreation" category, rather than the "talk" category), the cabal's decision was hard to see as anything but capricious and arbitrary.
In response, John Gilmore, Gordon Moffett and Brian Reid created a new top-level category in the Usenet hierarchy: alt., and in 1987, the first alt. newsgroup was formed: alt.gourmand.
The backbone did not carry the alt. hierarchy, but that wasn't the end of things. Gilmore was willing to subsidize the distribution of the alt. hierarchy, and he let it be known that he would pay the long distance charges to have his UUCP server dial up to distant systems and give them an alt. feed. Because UUCP allowed for the consolidation of feeds from multiple sources, Usenet users could get their regular Usenet feeds from the backbone cabal and their alt. feeds from Gilmore. As time went by, new services like Telenet provided cheaper ways of bridging systems than long-distance modem calls, the modems themselves got faster, and an Internet protocol for Usenet messages called NNTP was created; the alt. hierarchy became the most popular part of Usenet.
The crisis that the backbone cabal had feared never materialized. The alt. hierarchy's freewheeling rules -- that let anyone add any newsgroup without permission from third parties -- came to dominate the Internet, from the Web (anyone can add a website) to its many services (anyone can add a hashtag or create a social media group).
The story of the alt. hierarchy is an important lesson about the nearly forgotten art of "adversarial interoperability," in which new services can be plugged into existing ones, without permission or cooperation from the operators of the dominant service.
Today, we're told that Facebook will dominate forever because everyone you want to talk to is already there. But that was true of the backbone cabal's alt.-free version of Usenet, which controlled approximately one hundred percent of the socializing on the nascent Internet. Luckily, the alt. hierarchy was created before Facebook distorted the Computer Fraud and Abuse Act to try to criminalize terms of service violations. Usenet had no terms of service and no contracts. There were only community standards and mores, endlessly discussed.

It was created in an era when software patents were rare and narrow, before the US Patent and Trademark Office started allowing patents on anything so long as you put "with a computer" in the application. A few years later, and Usenet's creators might have tried to use Duke University and UNC's patent portfolio to shut down anyone who plugged something as weird, dangerous and amazing as alt. into the Usenet (wags insisted that alt. didn't stand for "alternative," but rather, "Anarchists, Lunatics, and Terrorists").

As alt. grew, its spread demanded that Usenet's software be re-implemented for non-Unix computers, which was possible because software interfaces were not understood to be copyrightable. Today, Oracle is seeking to have the courts seal off that escape hatch for adversarial interoperability.
Deprived of these shields against adversarial interoperability, the backbone cabal saw Usenet's network effects turned against it. Although it was dominated by the cabal, Usenet had everything the alt. hierarchy needed to thrive: the world's total population of people interested in using the Internet to socialize. That meant the creators of alt. could invite all Usenet users to expand their reading beyond the groups that met with the cabal's approval, without having to get the cabal's permission. Thanks to the underlying design of Usenet, the new alt. groups and the incumbent Usenet newsgroups could be seamlessly merged into a system that acted like a single service for its users.
If adversarial interoperability still enjoyed its alt.-era legal respectability, then Facebook alternatives like Diaspora could use their users' logins and passwords to fetch the Facebook messages the service had queued up for them and allow those users to reply to them from Diaspora, without being spied on by Facebook. Mastodon users could read and post to Twitter without touching Twitter's servers. Hundreds or thousands of services could spring up that allowed users different options to block harassment and bubble up interesting contributions from other users -- both those on the incumbent social media services, and the users of these new upstarts. It's true that unlike Usenet, Facebook and Twitter have taken steps to block this kind of federation, so perhaps the experience won't be as seamless as it was for alt. users mixing their feeds in with the backbone's feeds, but the main hurdle – moving to a new service without having to convince everyone to come with you – could be vanquished.
In the absence of adversarial interoperability, we're left trying to solve the impossible collective action problem of getting everyone to switch at once, or to maintain many different accounts that reach many different groups of potential users.
Regulators are increasingly bullish on interoperability and have made noises about creating standards that let one service plug into another one. But as important as these standards are, they should be the floor on interoperability, not the ceiling. Standards created with input from the tech giants will always have limits designed to protect them from being disrupted out of existence, the way they disrupted the market leaders when they were pipsqueak upstarts.
Restoring adversarial interoperability will allow future companies, co-operatives and tinkerers to go beyond the comfort zones of the winners of the previous rounds of the game -- so that it ceases to be a winner-take-all affair, and instead becomes the kind of dynamic place where a backbone cabal can have total control one year, and be sidelined the next.
Do you use CKAN to power an open data portal? In this guest post, Link Digital explains how you can take advantage of its latest open data initiative, Datashades.info.
Datashades.info is a tool designed to deliver insights for researchers, portal managers, and the wider tech community to inform and support open data efforts relating to data hosted on CKAN platforms.
Link Digital created the online service through a number of alpha releases and considers Datashades.info, now in beta, a long-term initiative it expects to improve with more features in future releases.
Specifically, Datashades.info provides a publicly-accessible index of metadata and statistics on CKAN data portals across the globe. For each portal, statistics covering the number of datasets, users, organisations and dataset tags are aggregated and presented. These statistics give portal managers the ability to quickly compare the size and scope of CKAN data portals to help inform their development roadmaps. Moreover, for each portal, installed plugin information is collected, along with the relative penetration of those plugins across all portals in the index. This enables CKAN developers to quickly see which extensions are the most popular and on which portals they are being used. Finally, all historical data is persisted and kept publicly accessible, allowing researchers to analyse historical data trends in any indexed CKAN portal.
Datashades.info was built to support a crowd-sourced indexing scheme. If a visitor searches for a CKAN portal and it is not found within the index, the system will immediately query that portal and attempt to generate a new index entry on-the-fly. Aggregation of a new portal’s statistics into Datashades.info also happens automatically.
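Under the hood, this kind of on-the-fly indexing can be built on CKAN's standard action API. The endpoints below are CKAN v3 actions, but the aggregation logic is only a sketch of what an indexer like Datashades.info might do, not its actual implementation:

```python
import json
from urllib.request import urlopen

def parse_action_response(payload):
    """Unwrap a CKAN action API response of the form
    {"success": bool, "result": ...}."""
    if not payload.get("success"):
        raise RuntimeError("CKAN action failed: %s" % payload.get("error"))
    return payload["result"]

def portal_stats(base_url):
    """Collect headline statistics for one CKAN portal via its action API."""
    def action(name):
        with urlopen(f"{base_url}/api/3/action/{name}") as resp:
            return parse_action_response(json.load(resp))

    return {
        "datasets": len(action("package_list")),
        "organisations": len(action("organization_list")),
        "tags": len(action("tag_list")),
        # status_show reports, among other things, the installed extensions
        "plugins": action("status_show").get("extensions", []),
    }

# Usage against any public CKAN portal, e.g.:
# portal_stats("https://demo.ckan.org")
```

A crowd-sourced indexer would run something like this the first time a visitor searches for an unknown portal URL, then persist the result for the historical views described below.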
Get the most out of the tool with the following features:
Globally Accessible Open Data
With Datashades.info, you can easily access an index of metadata and statistics on CKAN data portals across the globe. To do this, simply type in the portal’s URL on the homepage then click “Search”.
Integrated Values of All Metrics
After entering a portal’s URL, Datashades.info will load its information. After a few seconds, you will be able to see a range of data on portal users, datasets, resources, organisations, tags and plugins. Portal managers can access these via the individual portal page found on the site.
Easily-tracked Historical Data
Want to revisit data you previously explored? The tool also keeps old data in a historical index which users can explore any time on any portal page or by clicking “View All Data Portals” on the homepage.
Datashades.info uses crowdsourcing to build its index. This means users can easily add any CKAN data portal not found on the site. To do this, simply search for a portal you know and it’ll be automatically added to the site and global statistics.
As the project remains at a beta level of maturity, there is still room for improvement in many areas. But with the continuous feedback coming from the CKAN community, you can expect more data and features to be added in future releases. For now, have a look around and stay tuned!
Every so often, I ask folk in the department when they last wrote any code; often, I get blank stares back. Write code? Why would they want to do that? Code is for the teaching of, and big software engineering projects, and, and, not using it every day, surely?
I see code as a tool for making tools, often disposable ones.
Here’s an example…
I’m writing a blog post, and I want to list the file types recognised by Jupytext. I can’t find a list of the filetypes it recognises as a simple string that I can copy and paste into the post, but I do find this:
Copying out those suffixes is a pain, so I just copy that text string, which in this case happens to play nicely with Python (because it is Python), sprinkle a bit of code:
and here’s the list of filetypes supported by Jupytext: .py, .R, .r, .jl, .cpp, .ss, .clj, .scm, .sh, .q, .m, .pro, .js, .ts, .scala.
Note that is doesn’t have to be nice code, and there may be multiple ways of solving the problem (in the example, I use a hybrid “me + the computer” approach where I get the code to do one thing, I copy the output, paste that into the next cell and then hack code around that, as well as “just the computer” approach. The first one is perhaps more available to a novice, the second to someone who knows about .join()).
I tend to use code without thinking anything special of it; it’s just a tool that’s to hand to fashion other tools from, and I think that colours my attitude towards the way in which we teach it.
First and foremost, if you come out of a coding course not thinking that you now have a skill you can use quite casually to help get stuff done, you’ve been mis-sold…
This blog post took much longer to write than it took me to copy the _SCRIPT_EXTENSIONS text and write the code to extract the list of suffixes… And it didn’t take long to write the post at all…