
Don’t Write Copyright Law in Secret


We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

The United States is the world’s chief exporter of copyright law. With recent news that President Trump is expected to sign the United States-Mexico-Canada Agreement (USMCA) next week, we’re one step closer to Canada being forced to align its copyright term with the US’s life of the author plus 70 years, keeping important works out of the public domain for another 20 years.

The USMCA requires participating countries to set a copyright term of at least the life of the author plus 70 years. In practice, this measure will affect only Canada: the United States already has a life-plus-70 term, and Mexico’s is even longer (life plus 100 years). Only Canada has stuck with the life-plus-50 minimum required by the Berne Convention.

It’s a common story: again and again, trade agreements ratchet up participating countries’ copyright terms under the banner of “standardization” with the United States. But that standardization runs in only one direction: toward more restrictive copyright law. The failed Trans-Pacific Partnership Agreement (TPP) would have lengthened copyright terms for several participating countries. It also would have exported US copyright’s protection for digital locks to participating countries.

The USMCA is just the latest example: when copyright terms are negotiated in private, multinational agreements, the process tends to favor the interests of large media companies. Countries should decide their own copyright laws through inclusive, democratic processes, not secret negotiations.

Those copyright law expansions bring real threats to human rights in the countries where the United States exports them. In 2011, Colombian graduate student Diego Gomez shared another student’s Master’s thesis with colleagues over the Internet, sparking a six-year legal battle that could have put him in prison for years.

While Diego’s story has become a rallying cry for advocacy for open access to research, it’s important for another reason too. It shows the dangerous consequences of copyright-expanding trade agreements. The law Diego was tried under had a sentencing requirement that lawmakers passed in order to comply with a trade agreement with the U.S.

Trade agreements that expand copyright almost never carry requirements that participating nations honor limitations on copyright like fair use or fair dealing, leaving many countries with strong protection for large rights-holders and weak protection for their citizens’ rights.

Copyright should not be a global monolith. Differences between countries’ copyright laws are a feature, not a bug. In implementing copyright law, lawmakers should carefully balance the rights of copyright holders with the rights of the public to use and build upon copyrighted works. Lawmakers can’t strike that balance when their trade negotiators have already given the public’s rights away.

 




alt.interoperability.adversarial


Today, we are told that the bigness of Big Tech giants was inevitable: the result of "network effects." For example, once everyone you want to talk to is on Facebook, you can't be convinced to use another, superior service, because all the people you'd use that service to talk to are still on Facebook. And of course, those people also can't leave Facebook, because you're still there.

But network effects were once a double-edged sword, one that could be wielded both by yesterday’s Goliaths and today’s Davids. Once, network effects made companies vulnerable just as much as they protected them.

The early, pre-graphical days of the Internet were dominated by Usenet, a decentralized, topic-based discussion-board system that ran over UUCP (AT&T's Unix-to-Unix Copy utility), which allowed the administrators of corporate servers to arrange for their computers to dial into other organizations' computers, exchange stored messages with them, and pass on messages destined for more distant systems. Though UUCP was originally designed for person-to-person messaging and limited file transfers, the administrators of the world's largest computer systems wanted a more freewheeling, sociable system, and so Usenet was born.

Usenet systems dialed each other up to exchange messages, using slow modems and commercial phone lines. Even with the clever distribution system built into Usenet (which allowed for one node to receive long-distance messages for its closest neighbors and then pass the messages on at local calling rates), and even with careful call scheduling to chase the lowest long-distance rates in the dead of night, Usenet was still responsible for racking up some prodigious phone bills for the corporations who were (mostly unwittingly) hosting it.

The very largest Usenet nodes were hosted by companies so big that their Usenet-related long-distance charges were lost in the dictionary-sized bills those companies generated every month (some key nodes were operated by network administrators who worked for phone companies where long-distance calls were free).

The administrators of these key nodes semi-jokingly called themselves "the backbone cabal" and they saw themselves as having a kind of civic duty to Usenet, part of which was ensuring that their bosses never got wind of it and (especially) that Usenet never created the kind of scandal that would lead to public outcry that would threaten the project.

Which is why the backbone cabal was adamant that certain discussion forums be suppressed. Thanks to a convention proposed by EFF co-founder John Gilmore, there was a formal process for creating a Usenet newsgroup, requiring that a certain number of positive votes be cast for the group's creation by Usenet's users, and that those votes not be offset by too many negative votes. Though this compromise stacked the deck against controversy by allowing a critical mass of objectors to block even very popular proposals, some proposed controversial newsgroups made it through the vote.

When that happened, the backbone cabal's response was to "protect Usenet from its own users" by refusing to carry these controversial newsgroups on their long-haul lines, meaning that all the local systems (which depended on the backbone to serve up UUCP feeds without long-distance fees) would not be able to see them. It was a kind of network administrator's veto.

Usenet users chafed at the veto. Some of the "controversial" subjects the cabal blocked (like recreational drugs) were perfectly legitimate subjects of inquiry; in other cases (rec.gourmand -- a proposal for a group about cooking inside the "recreation" category, rather than the "talk" category), the cabal's decision was hard to see as anything but capricious and arbitrary.

In response, John Gilmore, Gordon Moffett and Brian Reid created a new top-level category in the Usenet hierarchy: alt., and in 1987, the first alt. newsgroup was formed: alt.gourmand.

The backbone did not carry the alt. hierarchy, but that wasn't the end of things. Gilmore was willing to subsidize the distribution of the alt. hierarchy, and he let it be known that he would pay the long-distance charges to have his UUCP server dial up distant systems and give them an alt. feed. Because UUCP allowed for the consolidation of feeds from multiple sources, Usenet users could get their regular Usenet feeds from the backbone cabal and their alt. feeds from Gilmore. As time went by, new services like Telenet provided cheaper ways of bridging systems than long-distance modem calls, the modems themselves got faster, and an Internet protocol for Usenet messages called NNTP was created; the alt. hierarchy became the most popular part of Usenet.

The crisis that the backbone cabal had feared never materialized. The alt. hierarchy's freewheeling rules -- that let anyone add any newsgroup without permission from third parties -- came to dominate the Internet, from the Web (anyone can add a website) to its many services (anyone can add a hashtag or create a social media group).

The story of the alt. hierarchy is an important lesson about the nearly forgotten art of "adversarial interoperability," in which new services can be plugged into existing ones, without permission or cooperation from the operators of the dominant service.

Today, we're told that Facebook will dominate forever because everyone you want to talk to is already there. But that was true of the backbone cabal's alt.-free version of Usenet, too, which controlled approximately one hundred percent of the socializing on the nascent Internet. Luckily, the alt. hierarchy was created before Facebook distorted the Computer Fraud and Abuse Act to try to criminalize terms-of-service violations. Usenet had no terms of service and no contracts; there were only community standards and mores, endlessly discussed. It was created in an era when software patents were rare and narrow, before the US Patent and Trademark Office started allowing patents on anything so long as you put "with a computer" in the application – a few years later, and Usenet's creators might have used Duke University and UNC's patent portfolios to try to shut down anyone who plugged something as weird, dangerous and amazing as alt. into Usenet (wags insisted that alt. didn't stand for "alternative," but rather "Anarchists, Lunatics, and Terrorists"). As alt. grew, its spread demanded that Usenet's software be re-implemented for non-Unix computers, which was possible because software interfaces were not understood to be copyrightable – but today, Oracle is seeking to have the courts seal off that escape hatch for adversarial interoperability.

Deprived of these shields against adversarial interoperability, Usenet's network effects were used against it. Despite being dominated by the backbone cabal, Usenet had everything the alt. hierarchy needed to thrive: the world's total population of people interested in using the Internet to socialize. That meant that the creators of alt. could invite all Usenet users to expand their reading beyond the groups that met with the cabal's approval, without having to get the cabal's permission. Thanks to the underlying design of Usenet, the new alt. groups and the incumbent Usenet newsgroups could be seamlessly merged into a system that acted like a single service for its users.

If adversarial interoperability still enjoyed its alt.-era legal respectability, then Facebook alternatives like Diaspora could use their users' logins and passwords to fetch the Facebook messages the service had queued up for them and allow those users to reply to them from Diaspora, without being spied on by Facebook. Mastodon users could read and post to Twitter without touching Twitter's servers. Hundreds or thousands of services could spring up that allowed users different options to block harassment and bubble up interesting contributions from other users -- both those on the incumbent social media services, and the users of these new upstarts. It's true that unlike Usenet, Facebook and Twitter have taken steps to block this kind of federation, so perhaps the experience won't be as seamless as it was for alt. users mixing their feeds in with the backbone's feeds, but the main hurdle – moving to a new service without having to convince everyone to come with you – could be vanquished.

In the absence of adversarial interoperability, we're left trying to solve the impossible collective action problem of getting everyone to switch at once, or to maintain many different accounts that reach many different groups of potential users.

Regulators are increasingly bullish on interoperability and have made noises about creating standards that let one service plug into another one. But as important as these standards are, they should be the floor on interoperability, not the ceiling. Standards created with input from the tech giants will always have limits designed to protect them from being disrupted out of existence, the way they disrupted the market leaders when they were pipsqueak upstarts.

Restoring adversarial interoperability will allow future companies, co-operatives and tinkerers to go beyond the comfort zones of the winners of the previous rounds of the game -- so that it ceases to be a winner-take-all affair, and instead becomes the kind of dynamic place where a backbone cabal can have total control one year, and be sidelined the next.




Introducing Datashades.info, a CKAN Community Service


Do you use CKAN to power an open data portal? In this guest post, Link Digital explains how you can take advantage of their latest open data initiative, Datashades.info.

Datashades.info is a tool designed to deliver insights for researchers, portal managers, and the wider tech community to inform and support open data efforts relating to data hosted on CKAN platforms.

Link Digital created the online service through a number of alpha releases and considers datashades.info, now in beta, a long-term initiative they expect to improve with more features in future releases.

Specifically, Datashades.info provides a publicly-accessible index of metadata and statistics on CKAN data portals across the globe. For each portal, a number of statistics are aggregated and presented, covering the number of datasets, users, organisations and dataset tags. These statistics give portal managers the ability to quickly compare the size and scope of CKAN data portals, to help inform their development roadmaps. Moreover, for each portal, installed plugin information is collected, along with the relative penetration of those plugins across all portals in the index. This enables CKAN developers to quickly see which extensions are the most popular and on which portals they are being used. Finally, all historical data is persisted and kept publicly accessible, allowing researchers to analyse historical trends in any indexed CKAN portal.

Datashades.info was built to support a crowd-sourced indexing scheme. If a visitor searches for a CKAN portal and it is not found within the index, the system will immediately query that portal and attempt to generate a new index entry on-the-fly. Aggregation of a new portal’s statistics into Datashades.info also happens automatically.
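
All of those statistics are available from CKAN's standard action API, so you can pull the same numbers for any portal yourself. Here is a minimal sketch (not Datashades' actual code) of how the headline figures might be collected; the endpoint names are standard CKAN actions:

    import requests

    def portal_stats(base_url):
        """Collect rough portal statistics from a CKAN portal's public action API."""
        def action(name):
            r = requests.get(f"{base_url}/api/3/action/{name}", timeout=30)
            r.raise_for_status()
            return r.json()["result"]

        return {
            "datasets": len(action("package_list")),
            "organisations": len(action("organization_list")),
            "tags": len(action("tag_list")),
            # status_show reports the portal's CKAN version and installed extensions
            "plugins": action("status_show")["extensions"],
        }

    print(portal_stats("https://demo.ckan.org"))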

Get the most out of the tool with the following features:

Globally Accessible Open Data

With Datashades.info, you can easily access an index of metadata and statistics on CKAN data portals across the globe. To do this, simply type in the portal’s URL on the homepage then click “Search“.

Integrated Values of All Metrics

After entering a portal’s URL, Datashades.info will load its information. After a few seconds, you will be able to see a range of data on portal users, datasets, resources, organisations, tags and plugins. Portal managers can access these via the individual portal page found on the site.

Easily-tracked Historical Data

Want to revisit data you previously explored? The tool also keeps old data in a historical index which users can explore any time on any portal page or by clicking “View All Data Portals” on the homepage.

Crowdsourcing

Datashades.info uses crowdsourcing to build its index. This means users can easily add any CKAN data portal not found on the site. To do this, simply search for a portal you know and it’ll be automatically added to the site and global statistics.

As the project remains at a beta level of maturity, there is still room for improvement in many areas. But with continuous feedback from the CKAN community, you can expect more data and features to be added in future releases. For now, have a look around and stay tuned!

 


What Do You Mean You Write Code EVERY DAY?


Every so often, I ask folk in the department when they last wrote any code; often, I get blank stares back. Write code? Why would they want to do that? Code is for the teaching of, and big software engineering projects, and, and, not using it every day, surely?

I disagree.

I see code as a tool for making tools, often disposable ones.

Here’s an example…

I’m writing a blog post, and I want to list the file types recognised by Jupytext. I can’t find a list of the filetypes it recognises as a simple string that I can copy and paste into the post, but I do find the _SCRIPT_EXTENSIONS dictionary in the Jupytext source code.

Copying out those suffixes is a pain, so I just copy that text string, which in this case happens to play nicely with Python (because it is Python), sprinkle a bit of code:
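
Something along these lines does the job (_SCRIPT_EXTENSIONS is the dict copied from the Jupytext source, trimmed here; it maps each suffix to that language's settings):

    # _SCRIPT_EXTENSIONS, pasted in from the Jupytext source (trimmed):
    _SCRIPT_EXTENSIONS = {
        '.py': {'language': 'python', 'comment': '#'},
        '.R': {'language': 'R', 'comment': '#'},
        # ...and so on, down to...
        '.scala': {'language': 'scala', 'comment': '//'},
    }

    # The suffixes are just the dict keys:
    print(', '.join(_SCRIPT_EXTENSIONS.keys()))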

and here’s the list of filetypes supported by Jupytext: .py, .R, .r, .jl, .cpp, .ss, .clj, .scm, .sh, .q, .m, .pro, .js, .ts, .scala.

Note that it doesn’t have to be nice code, and there may be multiple ways of solving the problem. In the example, I use a hybrid “me + the computer” approach (where I get the code to do one thing, copy the output, paste that into the next cell and then hack code around it), as well as a “just the computer” approach. The first one is perhaps more available to a novice, the second to someone who knows about .join().

So what?

I tend to use code without thinking anything special of it; it’s just a tool that’s to hand to fashion other tools from, and I think that colours my attitude towards the way in which we teach it.

First and foremost, if you come out of a coding course not thinking that you now have a skill you can use quite casually to help get stuff done, you’ve been mis-sold…

This blog post took much longer to write than it took me to copy the _SCRIPT_EXTENSIONS text and write the code to extract the list of suffixes… And it didn’t take long to write the post at all…

See also: Fragment – Programming Privilege.




Exploring Jupytext – Creating Simple Python Modules Via a Notebook UI


Although I spend a lot of my coding time in Jupyter notebooks, there are several practical problems associated with working in that environment.

One problem is that under version control, it can be hard to tell what’s changed. For one thing, the notebook .ipynb format, which is saved as a serialised JSON object, is hard to read cleanly.

The .ipynb format also records changes to cell execution state, including cell execution count numbers and changes to cell outputs (which may take the form of large encoded strings when a cell output is an image or a chart, for example).
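
A heavily trimmed, illustrative fragment gives the flavour of what a diff has to wade through:

    {
      "cells": [
        {
          "cell_type": "code",
          "execution_count": 7,
          "metadata": {},
          "outputs": [
            {
              "data": {
                "image/png": "iVBORw0KGgoAAAANSUhEUgAA..."
              }
            }
          ],
          "source": ["df.plot();"]
        }
      ]
    }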

Another issue arises when trying to write modules in a notebook that can be loaded into other notebooks.

One workaround for this is to use the notebook-loading hack described in the official docs: Importing notebooks. This requires a notebook-loader module that then allows you to import other notebooks as modules. Once the notebook loader module is installed, you can run things like:

  • import mycode as mc to load mycode.ipynb
  • moc = __import__("My Other Code") to load code in from My Other Code.ipynb

If you want to include code that can run in the notebook, but that is not executed when the notebook is loaded as a module, you can guard items in the notebook:

In this case, the if __name__=='__main__': guard will run the code in the code cell when run in the notebook UI, but will not run it when the notebook is loaded as a module.
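
A minimal, self-contained example of the pattern:

    def double(x):
        """A function we want to be importable from other notebooks."""
        return 2 * x

    # The guarded test code below runs when the notebook is opened in the
    # notebook UI, but not when the notebook is imported as a module.
    if __name__ == '__main__':
        print(double(21))  # 42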

Guarding code can get very messy very quickly, so is there an easier way?

And is there an easier way of using notebooks more generally as an environment for creating code+documentation files that better meet the needs of a variety of users? For example, I note this quote from Daniele Procida recently shared by Simon Willison:

Documentation needs to include and be structured around its four different functions: tutorials, how-to guides, explanation and technical reference. Each of them requires a distinct mode of writing. People working with software need these four different kinds of documentation at different times, in different circumstances—so software usually needs them all.

This suggests a range of different documentation styles for different purposes, although I wonder if that is strictly necessary?

When I am hacking code together, I find that I start out by writing things a line at a time, checking the output for each line, then grouping lines in a single cell and checking the output, then wrapping things in a function (for an example of this in practice, see Programming in Jupyter Notebooks, via the Heavy Metal Umlaut). I also try to write markdown notes that set up what I intend to do (and why) in the following code cells. This means my development notebooks tell a story (of a sort) of the development of the functions that hopefully do what I actually want them to by the end of the notebook.

If truth be told, the notebooks often end up as an unholy mess, particularly if they are full of guard statements that try to separate out development and testing code from useful code blocks that I might want to import elsewhere.

Although I’ve been watching it for months, I’ve only started exploring how to use Jupytext in practice quite recently, and already it’s starting to change how I use notebooks.

If you install jupytext, you will find that if you click on a link to a markdown (.md) or Python (.py) file, or one of a whole range of other text document types (.py, .R, .r, .Rmd, .jl, .cpp, .ss, .clj, .scm, .sh, .q, .m, .pro, .js, .ts, .scala), you will open the file in a notebook environment.

You can also open the file as a .py file from the notebook listing: select the notebook and then use the Edit button to open it, at which point you are presented with the “normal” text file editor.

One thing to note about the notebook editor view is that you can also include markdown cells, as you might in any other notebook, and run code cells to preview their output inline within the notebook view.

However, whilst the markdown cells will be saved into the Python file (as commented-out text), the code outputs will not be saved into the Python file.

If you do want to be able to save notebook views with any associated code output, you can configure Jupytext to “pair” .py and .ipynb files (and other combinations, such as .py, .ipynb and .md files) such that when you save an open .py or .ipynb file from the notebook editing environment, a “paired” .ipynb or .py version of the file is also saved at the same time.

This means I could click to open my .py file in the notebook UI, run it, then when I save it, a “simple” .py file containing just code and commented out markdown is saved along with a notebook .ipynb file that also contains the code cell outputs.

You can configure Jupytext so that the pairing only works in particular directories. I’ve started trying to explore various settings in the branches of this repo: ouseful-template-repos/jupytext-md. You can also convert files on the command line; for example, jupytext --to py Required\ Pace.ipynb will convert a notebook file to a Python file.
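
Pairing can also be set up from the command line (a sketch, assuming a reasonably recent Jupytext; the notebook name is made up):

    # Pair Example.ipynb with a light-format Example.py
    jupytext --set-formats ipynb,py:light Example.ipynb

    # After editing either file of the pair, bring the two back into step
    jupytext --sync Example.ipynb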

The ability to edit Python / .py files, or code containing markdown / .md files in a notebook UI, is really handy, but there’s more…

Remember the guards?

If I tag a code cell using the notebook UI (from the notebook View menu, select Cell Toolbar and then Tags), I can mark it with a tag of the form active-ipynb.

See the Jupytext docs: importing Jupyter notebooks as modules for more…

The tags are saved as metadata in all document types. For example, in an .md version of the notebook, the metadata is passed in an attribute-value pair when defining the language type of a code block:
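
It looks something like this (give or take version details):

    ```python tags=["active-ipynb"]
    print("This cell runs in the notebook, but not in the .py module")
    ```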

In a .py version of the notebook, however, the tagged code cell is not rendered as a code cell; it is commented out:
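
Something like:

    # + tags=["active-ipynb"]
    # print("This cell runs in the notebook, but not in the .py module")
    # -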

What this means is that I can tag cells in the notebook editor to include them — or not — as executable code in particular document types.

For example, if I pair .ipynb and .py files, whenever I edit either an .ipynb or .py file in the notebook UI, it also gets saved as the paired document type. Within the notebook UI, I can execute all the code cells, but through using tagged cells, I can define some cells as executable in one saved document type (.ipynb for example) but not in another (a .py file, perhaps).

What that in turn means is that when I am hacking around with the document in the notebook UI I can create documents that include all manner of scraggy developmental test code, but only save certain cells as executable code into the associated .py module file.

The module workflow is now:

  • install Jupytext;
  • edit Python files in a notebook environment;
  • run all cells when running in the notebook UI;
  • mark development code as active-ipynb, which is to say, it is *not active* in a .py file;
  • load the .py file in as a module into other modules or notebooks, leaving out the commented-out development code; if I use the %load_ext autoreload and %autoreload 2 magics in the document that’s loading the modules, it will automatically reload the modules (https://stackoverflow.com/a/5399339/454773) when I call functions imported from them, if I’ve made changes to the associated module file (see the sketch after this list);
  • optionally pair the .py file with an .ipynb file, in which case the .ipynb file will be saved: a) with *all* cells run; b) with cell outputs included.
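
The autoreload step in that workflow looks something like this in the consuming notebook (the module and function names are made up):

    %load_ext autoreload
    %autoreload 2

    # my_module.py is the Jupytext-managed Python file; edits saved to it
    # are picked up automatically the next time the function is called.
    from my_module import my_function
    my_function()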

Referring back to Daniele Procida’s insights about documentation, this ability to have code in a single document (for example, a .py file) that is executable in one environment (the notebook editing / development environment, for example) but not another (when loaded as a .py module) means we can start to write richer source code files.

I also wonder if this provides us with a way of bundling test code as part of the code development narrative? (I don’t use tests so don’t really know how the workflow goes…)

More general is the insight that we can use Jupytext to automatically generate distinct versions of a document from a single source document. The generated documents:

  • can include code outputs;
  • can *exclude* code outputs;
  • can have tagged code commented out in some document formats and not others.

I’m not sure if we can also use it in combination with other notebook extensions to hide particular cells, for example, when viewing documents in the notebook editor or generating export document formats from an executed notebook form of it. A good example to try out might be the hide_code extension, which provides a range of toolbar options that can be used to customise the display of a document in the notebook editor, or of HTML / PDF documents generated from it.

It could also be useful to have a very simple extension that lets you click a toolbar button to set an active- state tag and style or highlight that cell in the notebook UI to mark it out as having limited execution status. A simple fork of, or extension to, the freeze extension would probably do that. (I note that Jupytext responds to the “frozen” freeze setting but that presumably locks out executing the cell in the notebook UI too?)

PS a few weeks ago, Jupytext creator Marc Wouts posted this handy recipe for *rewriting* the commits of a git branch against markdown-formatted documents rather than the original ipynb change commits: git filter-branch --tree-filter 'jupytext --to md */*.ipynb && rm -f */*.ipynb' HEAD. This means that if you have a legacy project with commits made to notebook files, you can rewrite it as a series of changes made to markdown or Python document versions of the notebooks…




Manhattan DA Made Google Give Up Information on Everyone in Area as They Hunted for Antifa


When Gavin McInnes—founder of the violent, far-right group The Proud Boys—spoke to a Manhattan Republican club last October, the neighborhood response was less than welcoming. Protesters took to the normally sedate Upper East Side block with chants and spray paint. The Proud Boys responded with fists and kicks. Nearly a year later, as the assault and riot charges against four Proud Boys go to trial, prosecutors revealed that they had turned to an alarming new surveillance tool in this case: a reverse search warrant.

The Manhattan District Attorney's Office admitted it demanded Google hand over account information for all devices used in parts of the Upper East Side. They didn’t do this to find the Proud Boys; they did it to find Antifa members.

Reverse search warrants have been used in other parts of the country, but this is the first time one was disclosed in New York. Unlike a traditional warrant, where law enforcement officials request information on a specific phone or individual, reverse warrants allow law enforcement to target an entire neighborhood. Police and prosecutors create a “geofence”—a map area—and demand information on anyone standing in the zone. This flips the logic of search warrants on its head. Rather than telling service providers the name or phone number of a suspect, reverse search warrants start with the location and work backwards.

It’s a big change. Depending on the size and location of the geofence, a reverse search warrant can easily target hundreds or even thousands of bystanders. That scale is what makes reverse search warrants so enticing to law enforcement and so concerning to civil liberties groups. One concern is that the more broadly law enforcement uses surveillance, the higher the risk for “false discovery.” That’s a clinical way to say that the more people you spy on, the more innocent people will wrongly go to jail.

The phenomenon is well-documented in the sciences, where researchers have long known that “high false discovery rates occur when many outcomes of a single intervention are tested.” Essentially, when you look for too many patterns at the same time, you increase the danger that the data will fool you. When police officers request the data for hundreds or even thousands of devices, there’s a higher chance that they’ll wrongly think that one of those bystanders is a suspect.
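
The arithmetic behind that claim is stark; the numbers here are made up purely for illustration:

    # Hypothetical: a matching process that wrongly flags any given
    # innocent person only 0.1% of the time, applied to every device
    # caught inside a geofence.
    false_positive_rate = 0.001
    devices_in_geofence = 5000
    print(false_positive_rate * devices_in_geofence)  # ~5 innocent "matches" expected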

This isn’t just theoretical. That’s what Jorge Molina discovered in 2018, when Arizona detectives wrongly arrested him for a brutal murder, jailing him for nearly a week before he was exonerated. Officers demanded that Google hand over information on every single laptop, phone, tablet, and smart device in a two-block area. We don’t know how many accounts that includes, but it’s no surprise that, while sifting through that many devices, they quickly found a “match.” Only he was innocent.

In response to the Manhattan DA’s reverse search warrant, Google provided information that investigators used—along with images given to a private facial recognition company—to target two people who turned out to be innocent bystanders. Thankfully, unlike in Molina’s case, the two “matches” in Manhattan were never arrested—and the Antifa members have not been identified, even as several Proud Boys have stood trial.

But with the seal broken now in Manhattan, there are likely to be more geofence warrants and more false discoveries. While a judge needs to sign off on a reverse warrant, that formality provides little protection to the public. A traditional warrant application asks for information about the individual being targeted and the reasons they are suspected. With reverse warrants, judges don’t even know how many people’s data will be compromised. They simply don’t have enough information to do their job.

It’s also unclear how judges will evaluate reverse warrants around sensitive sites: political protests, houses of worship or medical facilities, among others. The practice is even more alarming when you consider the ways that ICE and other federal agencies could use a reverse warrant to pursue their deportation campaigns and target American immigrants.

None of this is to say that reverse search warrants are unique; they are just the latest example of how the surveillance capitalism that powers tech firms can become a tool for the government. Maybe some users who happily hand their data to the tech giants will second-guess that choice when they realize how quickly their digital sidekicks can morph into a big brother.

Albert Fox Cahn is the executive director of The Surveillance Technology Oversight Project at the Urban Justice Center, a New York-based civil rights and privacy organization. On Twitter @cahnlawny. 

2 public comments
awilchak (Brooklyn, New York):
this is the scary stuff
skorgu:
this_is_fine.jpg