Contrary to the inviting “Sounds good” button to accept the new policy and get to tweeting, the changes Twitter has made around user tracking and data personalization do not sound good for user privacy. For example, the company will now record and store non-EU users’ off-Twitter web browsing history for up to 30 days, up from 10 days in the previous policy.
Worst of all, the “control over your data” promised by the pop-up is on an opt-out basis, giving users choices only after Twitter has set their privacy settings to invasive defaults.
Instead, concerned users have to click “Review settings” to opt out of Twitter’s new mechanisms for user tracking. That will bring you to the “Personalization and Data” section of your settings. Here, you can pick and choose the personalization, data collection, and data sharing you will allow—or, click “Disable all” in the top-right corner to opt out entirely.
While you’re at it, this is also a good opportunity to review, edit, and/or remove the data Twitter has collected on you in the past by going to the “Your Twitter data” section of your settings.
Twitter has stated that these granular settings are intended to replace Twitter’s reliance on Do Not Track. However, replacing a standard cross-platform choice with new, complex options buried in the settings is not a fair trade. Although “more granular” privacy settings sound like an improvement, they lose their meaning when they are set to privacy-invasive selections by default. Adding new tracking options that users are opted into by default suggests that Twitter cares more about collecting data than respecting users’ choice.
Technological advances change the world. That's partly because of what they are, but even more because of the social changes they enable. New technologies upend power balances. They give groups new capabilities, increased effectiveness, and new defenses. The Internet decades have been a never-ending series of these upendings. We've seen existing industries fall and new industries rise. We've seen governments become more powerful in some areas and less in others. We've seen the rise of a new form of governance: a multi-stakeholder model where skilled individuals can have more power than multinational corporations or major governments.
Among the many power struggles, there is one type I want to particularly highlight: the battles between the nimble individuals who start using a new technology first, and the slower organizations that come along later.
In general, the unempowered are the first to benefit from new technologies: hackers, dissidents, marginalized groups, criminals, and so on. When they first encountered the Internet, it was transformative. Suddenly, they had access to technologies for dissemination, coordination, organization, and action -- things that were impossibly hard before. This can be incredibly empowering. In the early decades of the Internet, we saw it in the rise of Usenet discussion forums and special-interest mailing lists, in how the Internet routed around censorship, and how Internet governance bypassed traditional government and corporate models. More recently, we saw it in the SOPA/PIPA debate of 2011-12, the Gezi protests in Turkey and the various "color" revolutions, and the rising use of crowdfunding. These technologies can invert power dynamics, even in the presence of government surveillance and censorship.
But that's just half the story. Technology magnifies power in general, but the rates of adoption are different. Criminals, dissidents, the unorganized -- all outliers -- are more agile. They can make use of new technologies faster, and can magnify their collective power because of it. But when the already-powerful big institutions finally figured out how to use the Internet, they had more raw power to magnify.
This is true for both governments and corporations. We now know that governments all over the world are militarizing the Internet, using it for surveillance, censorship, and propaganda. Large corporations are using it to control what we can do and see, and the rise of winner-take-all distribution systems only exacerbates this.
This is the fundamental tension at the heart of the Internet, and information-based technology in general. The unempowered are more efficient at leveraging new technology, while the powerful have more raw power to leverage. These two trends lead to a battle between the quick and the strong: the quick who can make use of new power faster, and the strong who can make use of that same power more effectively.
This battle is playing out today in many different areas of information technology. You can see it in the security vs. surveillance battles between criminals and the FBI, or dissidents and the Chinese government. You can see it in the battles between content pirates and various media organizations. You can see it where social-media giants and Internet-commerce giants battle against new upstarts. You can see it in politics, where the newer Internet-aware organizations fight with the older, more established, political organizations. You can even see it in warfare, where a small cadre of military personnel can keep a country under perpetual bombardment -- using drones -- with no risk to the attackers.
This battle is fundamental to Cory Doctorow's new novel Walkaway. Our heroes represent the quick: those who have checked out of traditional society, and thrive because easy access to 3D printers enables them to eschew traditional notions of property. Their enemy is the strong: the traditional government institutions that exert their power mostly because they can. This battle rages through most of the book, as the quick embrace ever-new technologies and the strong struggle to catch up.
It's easy to root for the quick, both in Doctorow's book and in the real world. And while I'm not going to give away Doctorow's ending -- and I don't know enough to predict how it will play out in the real world -- right now, trends favor the strong.
Centralized infrastructure favors traditional power, and the Internet is becoming more centralized. This is true at the endpoints, where companies like Facebook, Apple, Google, and Amazon control much of how we interact with information. It's true in the middle, where companies like Comcast increasingly control how information gets to us. It's true in countries like Russia and China, which increasingly impose their own national agendas on their pieces of the Internet. And it's even true in countries like the US and the UK, which increasingly legislate new government surveillance capabilities.
At the 1996 World Economic Forum, cyber-libertarian John Perry Barlow issued his "Declaration of the Independence of Cyberspace," telling the assembled world leaders and titans of industry: "You have no moral right to rule us, nor do you possess any methods of enforcement that we have true reason to fear." Many of us believed him a scant 20 years ago, but today those words ring hollow.
But if history is any guide, these things are cyclic. In another 20 years, even newer technologies -- both the ones Doctorow focuses on and the ones no one can predict -- could easily tip the balance back in favor of the quick. Whether that will result in more of a utopia or a dystopia depends partly on these technologies, but even more on the social changes resulting from these technologies. I'm short-term pessimistic but long-term optimistic.
The web is not the traditional home of data visualization. You might come across a bar chart here or there in your daily browsing, but charts have never been a fixture of the web. That seems to be changing.
With the world becoming increasingly data-driven, we’re seeing more and more visualizations make their way onto our web pages and into our design briefs. They help us tell stories that better engage our users, and can even get them to take some kind of meaningful action.
The problem is that these datasets—sometimes so large they’re literally called “big data”—can make visualization with meaning difficult. But that’s something we as designers are equipped to tackle. We just have to know what our users are hoping to gain from viewing and interacting with visualizations, and what we have to do to make their effort worthwhile.
Data has a strong power to persuade—powerful enough to change users’ everyday behavior, especially when it is informative, clear, and actionable. We should be putting data visualizations to work on our sites, enhancing our designs to show users how the data serves the story they’ve come to learn about.
Data visualization on the web becomes meaningful when it allows people to discover the smaller stories that resonate with them, customizing their experience instead of forcing them down a predetermined path.
Users who try to interact with large, generally disconnected sets of data while navigating a site or hunting for relevant information face a difficult, if not impossible, task. Even though the web is a natural medium for delivering truly interactive data, our sites lose usability when those visualizations aren’t well designed.
As with all design, the approach we take when creating a user-minded visualization is based on the context and the constraints we have to work with. Good data visualizations—those with meaning—need to be accessible and human even though data is rarely described with those words.
Telling a story
The key to designing visualizations is to focus on something in the dataset that is relatable to and resonates with your users. I stumbled upon this while creating a visualization from the publicly available Open Food Facts dataset, which contains crowd-sourced information on food products from all over the world.
Although the dataset covers an extensive range of information (even down to packaging materials and number of additives), I chose to focus on comparing average sugar consumption among different countries (Fig. 1) because I was personally concerned about that topic. It turned out to be a concern for others as well and became the most popular project for the dataset on Kaggle.
Even though I didn’t make extensive use of the dataset in my rough and ugly visualization, what I chose to focus on told a story that resonated with people because most were from the countries listed or had a growing general awareness of high sugar consumption and its effect on health. In retrospect, what’s more personal and important than your health?
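The aggregation behind a comparison like Fig. 1 is simple. Here is a minimal sketch of it; the rows and field names below are invented stand-ins for the Open Food Facts export, which is far larger and messier.

```python
# Average sugar content per country, the core computation behind a
# "sugar consumption by country" visualization. Sample rows are
# hypothetical; the real dataset has many more fields and records.
from collections import defaultdict

products = [
    {"country": "US", "sugars_100g": 22.0},
    {"country": "US", "sugars_100g": 18.0},
    {"country": "FR", "sugars_100g": 12.0},
    {"country": "FR", "sugars_100g": 8.0},
]

def average_sugar_by_country(rows):
    """Group products by country and average their sugar content."""
    grouped = defaultdict(list)
    for row in rows:
        grouped[row["country"]].append(row["sugars_100g"])
    return {country: sum(vals) / len(vals) for country, vals in grouped.items()}

print(average_sugar_by_country(products))  # {'US': 20.0, 'FR': 10.0}
```

The point of the exercise is the ruthless focus: two fields out of a dataset with hundreds, chosen because they tell a story users care about.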
Selecting data points that strengthen a story with a positive result (whether that’s eating less sugar or reducing large-scale chemical emissions) can be great, but it’s important to present a story that is as unbiased as possible and to make ethical decisions about which parts of the data we want to use while telling the story.
But what exactly is a story in the context of a data visualization? We can’t kick it off with “once upon a time,” so we have to approach the idea in a different way.
Stories can make you question the state of a situation.
Addressing some or all of these attributes is a particular challenge for big datasets because the sheer amount of information can make finding a narrative difficult. But big or not, the principles remain the same. Visualizing any kind of data-driven story that resonates can have a powerful influence on users’ decisions.
It also stirs other questions the user might ask.
For instance, why do certain countries consume higher quantities of sugar? Are they the ones we expected? The information could challenge an assumption or two someone may have had prior to seeing the results. Just remember that visualization can be a stepping stone to further discovery, increasing the user’s knowledge and possibly affecting their everyday choices going forward.
If you’re trying to embed meaning into a large visualization through the story of a dataset’s subsection, it’s important to:
Discover what your users care about in the dataset. Make it relevant to their personal needs, desires, and interests.
Focus on that subsection ruthlessly. Get rid of anything that doesn’t further the story your visualization is telling.
Take care to make ethical, unbiased decisions about which data points you use to create visualizations that might influence your users.
Be careful not to give people all the answers; allow them to ask their own questions and make their own discoveries about the data.
This approach allows you to create something that not only resonates at a personal level, but also presents meaning in a way that encourages and allows users to take action.
But we already have a story
Some big datasets, despite their size, already revolve around a single story. An interesting way of handling this is to simultaneously display different aspects of the dataset, allowing the user to discover that meaning. This is called the “small multiples” technique. (Fig. 2)
The cluster of visualizations above, for example, deals with the “story” of memory stall issues on a computer. What I find interesting about the cluster is that the heading of every visualization starts with some variation of “memory stall time.” Despite being separate visualizations, they are linked by the single story they tell, each presenting it from a distinct perspective.
It’s possible for perspectives to look completely different from one another if they visualize different kinds of data. For instance, bar charts and area charts can harmoniously coexist if the representations are appropriate for the data they’re showing. The Australian Census Explorer illustrates how this might work (Fig. 3). It allows the user to establish their own narrative through choice of topic, such as language or place.
Framing visualizations around a personal topic (like someone’s native language) affects all associated small multiples appropriately; reframing serves to personalize the data. (Fig. 4)
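The data-preparation step behind small multiples is a facet split: one dataset divided into slices, each of which becomes its own small chart. A minimal sketch, with hypothetical records loosely modeled on the memory-stall example:

```python
# Split one dataset into per-facet slices; each slice would be drawn
# as one panel in a small-multiples layout. Records are invented.
from itertools import groupby

records = [
    {"metric": "memory stall time", "facet": "by cache size", "value": 3},
    {"metric": "memory stall time", "facet": "by cache size", "value": 5},
    {"metric": "memory stall time", "facet": "by block size", "value": 2},
]

def facets(rows):
    """Group rows into one list per facet -- one future chart each."""
    keyed = sorted(rows, key=lambda r: r["facet"])
    return {k: list(g) for k, g in groupby(keyed, key=lambda r: r["facet"])}

panels = facets(records)
print(sorted(panels))  # ['by block size', 'by cache size']
```

Each panel then shares the same story (the metric) while showing a different perspective on it, which is exactly what keeps the multiples feeling like one visualization.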
Storytelling through interaction
It can be very useful with this approach to include an interaction in one design that is capable of affecting the others—something to help the user see relationships between data points they might not have considered before. This example from essay site Polygraph shows all Kickstarter projects across space, organized here by category and American city. (Fig. 7)
The visualization is particularly interesting because it allows users to view the relationship of one variable (in this case, the project category) to others, such as American cities or project sizes. (Notice the prevalence of music projects in Nashville and game projects in Austin and Seattle).
This can be even more effective for small multiples shown across time. Fig. 9 shows how this approach is used on a fund manager’s website. Changing the time period of an investment fund’s performance also shows how risk rating and the growth of an investment change during that period. By leveraging intuitive web animation, we can view snapshots of the data at precise moments in time.
If the dataset is already centered around some kind of overarching story, it can be a good idea to:
Display different parts of the dataset in separate visualizations simultaneously.
Treat these separate visualizations as individuals tailored to the data they’re presenting. (Bar charts and area charts can live together in harmony if the data makes it appropriate.)
If there is interaction, ensure that it affects the entirety of your visualization approach so that the relationships between data points are more apparent.
Apply well-considered web animation techniques to ensure that the interaction is intuitive.
There are too many stories
What do we do when a dataset doesn’t have a single, big story to tell, yet we still need to visualize everything in it?
Some datasets lack the specific focus of examples like “memory stall time,” “fund performance,” or “all-Kickstarter-projects-ever,” but their data points may still have internal relationships that reveal bite-sized stories. How do we create actionable meaning for those visualizations?
Simply showing data as-is, even in a visualization that seems to fit, rarely works well. In Fig. 10 we see relationships between Python code packages, but in a way that’s just as messy and incoherent as the data in its natural state. The lack of focus and narrative is notable. (That said, the dataset is extremely large, so a single narrative isn’t actually possible.)
Since a single story isn’t possible in this situation, a better approach is to allow users to discover their own story. Your job is to facilitate that via the interaction design of the visualization.
Again, at first glance the visualization seems messy and incoherent—but look closer. Users can investigate any individual code package, including its relationships (listed in the bottom left). A handy search bar sits in the top left corner.
What makes this particular visualization more meaningful is that the user can explore it in 3D space via keyboard and mouse. Leveraging this uniquely digital capability in the browser allows users to start discovering their own story in the enormous swarm of data, “moving” toward areas in the visualization that they find more relevant to their interests or needs. (Fig. 12)
Once users find a package or group of packages they’re interested in exploring, they can click on one for a specific and focused view of that package in isolation, including its relationships with other packages. A full breakdown of these relationships appears on the left of the screen, including visual nodes linking directly to the GitHub page for that package. (Fig. 13)
This visualization, like the one shown before it, uses the idea of a network in order to display the immensity of the data, but it also uses intuitive interaction and lets the user explore in order to extract personally relevant meaning. It uses the modern advantages of the web to deal with the modern problems of big datasets, much like the following visualization from OpenCorporates. (Fig. 14)
This design allows users to zero in on data they care about, choosing where they go and which breadcrumbs offer meaningful insight.
If a dataset needs to be fully visualized but has smaller stories within it, it may be useful to:
Show all data, but give users the ability to create chunks or segments they wish to explore.
Leverage the advantages of being digital. For example, explore how input devices (e.g., keyboard and mouse) can facilitate how users interact with the data.
Use visual metaphors that support extensive and intricate relationship associations, such as a tree or network.
Visualization with meaning
Data is powerful in the right hands, and presenting it is something we’re skilled at on our websites. But toss in phrases like “big data” or “data visualization” and we second-guess ourselves instead of owning it as part of our workflow. The web is actually a great place for data visualization.
Leveraging the benefits of “digital” environments and tools, we can help users get what they need from large, complicated datasets. They are looking for insights, for meaningful information presented simply, for stories that resonate—for data stories they care about. We can help them find those stories by blending in a few new techniques on our end, such as sub-selections of data, use of small multiples to show relationships between data points, or even allowing user-driven focus on the full dataset.
Today, InternetLab, one of Brazil’s leading independent Internet policy research centers, released its 2017 report on local telecommunications companies and how they handle their customers’ private information. “Quem defende seus dados?” (“Who defends your data?”) seeks to encourage companies to compete for users by showing which ones commit to protecting their customers’ privacy and data. To that end, InternetLab evaluated the policies of the most important Brazilian telecommunications companies to assess their commitment to user privacy when the state requests their customers’ personal information.
This report is part of a South America-wide initiative by the continent’s leading digital rights groups to shed light on Internet policy practices in the region, modeled on EFF’s annual “Who Has Your Back?” report. Last week, Paraguay’s TEDIC and Chile’s Derechos Digitales released their respective reports. Digital rights groups in Colombia, Mexico, and Argentina will publish similar studies soon.
InternetLab selected the Internet service providers that, according to data published by ANATEL (the Brazilian National Telecommunications Agency) in October 2016, account for at least 10% of all Internet access in Brazil -- whether broadband or mobile. “Quem defende seus dados?” thus covers a set of companies responsible for 90% of Brazil’s Internet connections -- NET, Oi, and Vivo (broadband) and Claro, Oi, TIM, and Vivo (mobile). Together, these companies’ records hold intimate information about the movements and relationships of nearly every citizen in the country.
InternetLab developed its own methodology to capture Brazil’s social and legal specificities, focusing on (1) public commitment to complying with the law; (2) adoption of pro-user practices and policies; and (3) transparency about practices and policies. The report promotes transparency and best practices in privacy and data protection, empowering Internet users by educating them about their choices as consumers.
Each company was evaluated in six categories:
Information on data processing: Does the ISP provide clear and complete information about data collection, use, storage, processing, and protection?
Information on the conditions for handing data to state agents: Does the ISP promise to hand over subscriber records and connection logs only under a court order, and subscriber records, upon request, only to competent administrative authorities?
Defending user privacy in the courts: Has the ISP judicially challenged abusive data requests or legislation it considers invasive of user privacy?
Public pro-privacy stance: Has the ISP taken public positions on bills and public policies affecting user privacy, defending provisions that improve the protection of that right?
Transparency reports on data requests: Does the company publish transparency reports stating how many times it received data requests from state authorities and how many times it complied?
User notification: Does the company notify users when it receives requests for their data?
Below is the ranking of Brazilian telecommunications companies:
Signs of improvement have appeared since InternetLab’s first report. This year, Vivo was the only company to receive a full star for informing its customers about data protection practices, and also for publishing a transparency report; these were the first full stars ever awarded in those categories. In addition, InternetLab gave full stars to Claro, Oi, and TIM for fighting for their users’ rights in the courts; last year, only TIM had earned the full star. The mobile divisions of Vivo and TIM vied for first place, each with 3¾ stars.
However, in 2017 no company received a full star for committing to disclose personal data and connection logs only under a court order or, in the case of personal data, upon request by the competent administrative authorities. Last year, InternetLab had awarded full stars to two companies in the earlier version of this category. And, once again, no company earned credit for notifying its customers of government data requests.
Despite unquestionable progress, there is still significant room for improvement. InternetLab invites the companies to develop privacy policies so that users can understand how their personal data is processed, as required by the Marco Civil da Internet, and how ISPs handle government demands for information. InternetLab also encourages the companies to use the “press rooms” on their websites to list their actions in defense of privacy and data protection in the courts and in public debates. Finally, InternetLab urges the companies to publish transparency reports and to adopt user notification practices.
Hmm. This was interesting up until it got into the evo-psych explanation at the end. Yes, we probably did evolve to dismiss ideas that threaten our core beliefs. But like almost any pattern of thought, it is both a consistent bias and a reasonable thing to do most of the time.
It is completely rational to be slow to change your beliefs when faced with new evidence. It is also completely rational to respond to intellectual threats with hostility and retrenchment. At the same time, these completely rational responses can also lead us to reject compelling evidence in some circumstances.
Why are these responses rational? First, because conversation, books, articles, and speeches are all very weak evidence. I've got a mountain of experience behind me informing my beliefs. And somebody has just added a metaphorical pebble. Occasionally, it might cause a landslide resulting in a profound change in my understanding of the world. But most of the time it does not. If somebody changes their mind every time they hear an assertion, we tend to think of them as foolish and credulous. And for good reason.
I have heard assertions throughout my life. Some were true and many were false. When I hear a new one, I have to evaluate it to get a sense of how probable it is.
Second, there is no fundamental way for me to perceive whether an assertion is true or false just by looking at it. Instead, I have to make a judgement based on my previous beliefs and experiences (also known as pre-existing biases) and how much I trust the person making the assertion (also known as accepting an argument from authority). In a formal-logic sense, both of these are clearly fallacies. But they are all I have to go on, so I have to use these tools to make sense of the assertion. If the assertion fits comfortably in the house of my core beliefs and I have a certain amount of trust in the asserter, then I might be willing to accept it. Otherwise, I will likely reject it.
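The "mountain of experience versus a pebble" intuition is just Bayes' rule with a strong prior. A toy calculation (all numbers invented for illustration) shows how little one weak assertion moves a confident belief:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(belief | evidence) via Bayes' rule."""
    num = p_evidence_if_true * prior
    return num / (num + p_evidence_if_false * (1 - prior))

# Suppose I'm 99% sure of a core belief, and someone asserts the
# opposite. Say people make such assertions somewhat more often when
# the belief is false (60%) than when it's true (40%) -- weak
# evidence either way. These probabilities are hypothetical.
posterior = bayes_update(prior=0.99,
                         p_evidence_if_true=0.40,
                         p_evidence_if_false=0.60)
print(round(posterior, 3))  # still well above 0.98: barely budges
```

One pebble barely moves the mountain, which is exactly the rational behavior described above; the danger is that the same arithmetic applies when the 99% prior happens to be wrong.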
Now let's say that somebody said something that sounds absurd. 'Absurd' is just another name for something that doesn't fit my core beliefs. In that case, I would likely reject it out of hand. And I would also feel threatened. I would likely downgrade the source. I might even become angry because a common cause of false assertions is that somebody is trying to trick me. Maybe they want me to look foolish or they want to defraud me.
Now the difficulty is that these reactions are reasonable whether or not my core beliefs are true. So if I come to have a core set of beliefs that happens to be incorrect (which is almost certainly true to some extent or another), then this rule of thumb can prevent me from replacing them with better core beliefs that are more true.
I think the answer has to be somewhere in the middle. We should not change our beliefs every time we hear a countervailing assertion, but it is also important to seek out different perspectives on those beliefs. Without fresh inlets, our core beliefs become stagnant intellectual swamps. But if we accept new assertions too readily, we become a river, each new idea passing through only to be replaced by the next, with no possibility of retaining truth.
And it is important to use this kind of thought process as a form of self-improvement rather than as an argumentative bludgeon. It is far too easy to read about some fallacy or bias and then use it as a reason to find your opponent 'irrational' rather than using it as a tool for yourself.
This New York Times article gets a lot wrong, and both podcast listeners and podcast producers should be clear on what Apple’s actual role in podcasting is today and what, exactly, big producers are asking for.
Podcasts work nothing like the App Store, and we’re all better off making sure they never head down that road.
Podcasts still work like old-school blogs:
Each podcast can be hosted anywhere and completely owned and controlled by its producer.
Podcast-player apps periodically check each subscribed podcast’s RSS feed, and when a new episode is published, they fetch the audio file directly from the producer’s site or host.
Monetization and analytics are completely up to the podcasters.
Some podcasts have their own custom listening apps that provide their creators with more data and monetization opportunities.
It’s completely decentralized, free, fair, open, and uncontrollable by any single entity, as long as the ecosystem of podcast-player apps remains diverse enough that no app can dictate arbitrary terms to publishers (the way Facebook now effectively controls the web publishing industry).1
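The subscribe-and-fetch flow above needs nothing more than an RSS parser. A minimal sketch, with the feed inlined so it is self-contained (a real player would download it with an HTTP GET; the show and URLs are hypothetical):

```python
# How a podcast player finds new episodes: read the show's RSS feed
# and take each item's <enclosure> URL, which points at the audio
# file on the publisher's own host.
import xml.etree.ElementTree as ET

FEED_XML = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Show</title>
    <item>
      <title>Episode 2</title>
      <enclosure url="https://example.com/ep2.mp3" type="audio/mpeg" length="123"/>
    </item>
    <item>
      <title>Episode 1</title>
      <enclosure url="https://example.com/ep1.mp3" type="audio/mpeg" length="456"/>
    </item>
  </channel>
</rss>"""

def episode_urls(feed_xml: str) -> list[str]:
    """Return the audio-file URL for every episode in the feed."""
    root = ET.fromstring(feed_xml)
    return [item.find("enclosure").attrib["url"] for item in root.iter("item")]

print(episode_urls(FEED_XML))
# The player then downloads each new URL directly from the
# publisher's host -- no central intermediary involved.
```

Every podcast app, Apple’s included, is ultimately doing some version of this poll-and-fetch loop, which is why no single party sits between publisher and listener.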
Apple holds two large roles in podcasting today that could threaten the medium’s health, but haven’t yet:
The biggest player app: Apple’s built-in iOS Podcasts app is the biggest podcast player in the world by a wide margin, holding roughly 60–70% market share.
The biggest podcast directory: The iTunes Store’s Podcasts directory is the only one that matters, and being listed there is essential for podcasts to be easily found when searching in most apps.
Critically, despite holding these large roles, Apple has never locked out other players, has dictated almost no terms to podcasters,2 and has never inserted itself as an intermediary beyond the directory stage.
Like most of the iTunes Store, the podcast functionality has been almost completely unchanged since its introduction over a decade ago. And unlike the rest of the Store, we’re all better off if it stays this way.
Apple’s directory gives podcast players the direct RSS feed of podcasts found there, and then the players just fetch directly from the publisher’s feeds from that point forward. Apple is no longer a party to any activity after the search unless you’re using Apple’s player app.
There’s nothing stopping anyone else from making their own directory (a few have), and any good podcast player will let users bypass directories and subscribe to any podcast in the world by pasting in its URL.
Apple’s editorial features are unparalleled in the industry. I don’t know of anyone who applies more human curation to podcasts than Apple.
The algorithmic “top” charts, as far as podcasters have been able to piece together, are based primarily (or solely) on the rate of new subscriptions to a podcast in Apple Podcasts for iOS and iTunes for Mac.
Subscriptions happening in other apps have no effect on Apple’s promotional charts because, as long as this remains decentralized and open, Apple has no way of knowing about them.
Apple’s Podcasts app for iOS is fine, but not great, leaving the door wide open for better apps like mine. (Seriously, it’s much better, and it’s free. The App Store in 2016 is no place for modesty.)
Apple’s app has only a few integrations and privileges that third-party apps can’t match, and they’re of ever-decreasing relevance. They haven’t locked down the player market at all.
Ignoring for the moment that “podcasters” in news articles usually means “a handful of the largest producers, a friend or two of the reporter, and a press release from The Midroll, who collectively believe they represent all podcasters, despite only being the mass-market tip of the iceberg, as if CBS represented all of television or Business Insider represented all of blogging,” and this article is no exception, what these podcasters are asking for is the same tool web publishers have used and abused to death over the last decade to systematically ruin web content nearly everywhere: detailed user data and tracking.
Podcasts are just MP3s. Podcast players are just MP3 players, not platforms to execute arbitrary code from publishers. Publishers can see which IP addresses are downloading the MP3s, which can give them a rough idea of audience size, their approximate locations, and which apps they use. That’s about it.
They can’t know exactly who you are, whether you searched for a new refrigerator yesterday, whether you listened to the ads in their podcasts, or even whether you listened to it at all after downloading it.3
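The only analytics an open ecosystem hands a publisher can be sketched in a few lines: counting distinct IPs that fetched an episode from a web server access log. The log lines below are invented but follow the common combined log format:

```python
# Approximate a podcast's audience from a server access log: the
# number of distinct IP addresses that downloaded the episode file.
LOG = """\
203.0.113.5 - - [01/Jul/2016] "GET /ep1.mp3 HTTP/1.1" 200 100 "-" "Overcast"
203.0.113.5 - - [02/Jul/2016] "GET /ep1.mp3 HTTP/1.1" 200 100 "-" "Overcast"
198.51.100.7 - - [02/Jul/2016] "GET /ep1.mp3 HTTP/1.1" 200 100 "-" "Apple Podcasts"
"""

def unique_listeners(log: str, episode: str) -> int:
    """Rough audience size: distinct IPs that fetched the file."""
    return len({line.split()[0] for line in log.splitlines()
                if episode in line})

print(unique_listeners(LOG, "/ep1.mp3"))  # 2
# And that's about all a publisher can learn: no identity, no
# listen-through rate, no behavior beyond the download itself.
```

The user-agent string at the end of each line is what tells publishers which apps their audience uses; everything past that is invisible to them.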
Big publishers think this is barbaric. I think it’s beautiful.
Big publishers think this is holding back the medium. I think it protects the medium.
And if that ill-informed New York Times article is correct in broad strokes (a big “if,” given how much it got wrong about Apple’s role in podcasting), big podcasters want Apple to add more behavioral data and creepy tracking to the Apple Podcasts app, then share the data with them. I wouldn’t hold my breath on that.
By the way, while I often get pitched on garbage podcast-listening-behavioral-data integrations, I’m never adding such tracking to Overcast. Never. The biggest reason I made a free, mass-market podcast app was so I could take stands like this.
Big podcasters also apparently want Apple to insert itself as a financial intermediary to allow payment for podcasts within Apple’s app. We’ve seen how that goes. Trust me, podcasters, you don’t want that.
It would not only add rules, restrictions, delays, and big commissions, but it would increase Apple’s dominant role in podcasts, push out diversity, give Apple far more control than before, and potentially destroy one of the web’s last open media ecosystems.
Podcasting has been growing steadily for over a decade and extends far beyond the top handful of public-radio shows. Their needs are not everyone’s needs, they don’t represent everyone, and many podcasters would not consider their goals an “advancement” of the medium.
Apple has only ever used its dominant position benevolently and benignly so far, and as the medium has diversified, Apple’s role has shrunk. The last thing podcasters need is for Apple to increase its role and dominance.
And the last thing we all need is for the “data” economy to destroy another medium.
Companies running completely proprietary podcast platforms so far, trying to lock it down for themselves: Stitcher, TuneIn, Spotify, Google. (I haven’t checked in a while: has everyone finally stopped believing Google gives a damn about being “open”?) ↩
Beyond prohibiting pornographic podcasts in their directory and loosely encouraging publishers to properly use the “Explicit” tag. ↩
Unless you listen with the podcast publisher’s own app, in which case they can be just as creepy as on the web, if not more so. But as long as the open, RSS-based ecosystem of podcast players remains dominant, including Apple Podcasts, virtually nobody can afford to lock down their podcasts to only be playable from their own app. ↩