Google vs Belgium

DemoCoder

Veteran
Wha? No discussion. Apparently, under Belgian law opt-out (robots.txt and HTML meta tags) doesn't count as consent, and therefore indexing a site and displaying the title and a summary paragraph of a news article is illegal and violates copyright.

The Belgian news association wants search engines to use "opt-in" instead; that is, a search engine wouldn't index a site unless the site explicitly says it wants to be indexed. (Of course, they also want to be paid to have their sites indexed.)

Needless to say, this would practically destroy search engines on the web if it were required.

The amazing thing is that these idiots don't realize that Google indexing their site and sending hits to it *drives* traffic they would not otherwise get, increasing the value of their site's ad inventory. People fight one another to get INTO Google and would never dream of asking Google to *PAY THEM* for the right to index their data, since the indexing pays for itself.

Google has been sued for NOT indexing people's sites thoroughly enough and not ranking them highly enough. Here they are being sued for indexing them automatically.

NEWS.GOOGLE.COM does not display full news articles. It displays a title, a one-paragraph summary, and a link. It also doesn't display full pictures, only tiny thumbnails. What could be a more clear-cut example of FAIR USE? I mean, will they sue bloggers who quote one paragraph with a title link? Of course not; this is nothing more than a shakedown of Google by an archaic and clueless MSM newspaper industry.

Google essentially wiped Belgian news sites off the map, removing them not only from news.google.com but also from google.be search. I say, well deserved. Maybe after they realize the gravy train of free traffic they were getting from Google has stopped, they will stop asking for ridiculous payola for the right to be indexed.

The paper printing industries (books, news, etc.) are running around like chickens with their heads cut off, taking absurdist positions on online indexing.

If people can't link without paying and can't quote even small snippets of text, just what value is there in hypermedia?

And how could the Belgian courts be so stupid? Not only did they convict Google without Google getting to present its side of the case, they also required Google to permanently post the court decision PDF on its front page or else face a 500k euro fine per day! Who the fuck wants their bandwidth wasted downloading a government court decree every time they load the front page?

I can't believe the insanity of the Belgian, and also French, courts in their rulings on fair use of news media clippings. All Google does is index and provide a title, a summary, and in some cases a super-tiny thumbnail of an AP photo. And they're alleging copyright infringement?!? They are the beneficiaries of such front-page indexing, and they still want to shake down Google for more fees.

IMHO, Google should just announce they are blocking ANY site of ANY organization that attacks FAIR USE copyright provisions.

(BTW, one of the claims by the newspapers, that Google was making money from showing ads on news.google.com, is blatantly false. news.google.com does not show advertising; it is purely a news aggregator.)
 
Maybe after they realize the gravy train of free traffic they were getting from Google has stopped, they will stop asking for ridiculous payola for the right to be indexed.

So when they do realise this, how much do you think Google should charge to re-index them? :smile:
 
This is the first I've heard about this, and it sounds immensely stupid and counterproductive to me. Oh well, it's their loss. Glad I'm not living in Belgium...
 
As I understand it, the crux of the issue isn't that Google is aggregating and using news from Belgian online outlets per se; it's that Google is caching these pages and keeping them available even after the news site has moved them into its 'archive', which it charges access to as part of a subscription model.

This makes for a very different argument, and one I'm honestly not sure where I stand on.
 
So then they should put a rule in their robots.txt file telling Google not to index the site at all. They can't have it both ways at once.
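
For reference, that entire opt-out amounts to a couple of lines in a robots.txt file at the site root. A minimal sketch (User-agent: Googlebot targets only Google's crawler; User-agent: * would cover every well-behaved spider):

    # robots.txt at the site root: keep Google's crawler out of the whole site
    User-agent: Googlebot
    Disallow: /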

Suing Google is just an evil scheme to try to wring money they have no right to out of the company.
 
The decision is published all over the place. Making the sites' content available via Google's cache was only one part of the case. The main thrust was against news.google.com: they are opposed to Google linking to them and excerpting titles and summary paragraphs. I have an interview with Copiepresse about this. http://blog.searchenginewatch.com/blog/060920-152314

If they don't want Google to show excerpts or serve a cached copy after an article expires, they just need to use an Expires header plus a noarchive meta tag. http://www.google.com/support/webmasters/bin/answer.py?answer=35306
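
Concretely, that comes down to one meta tag in the page's head (to keep the article out of Google's cache) and one Expires header on the article's HTTP response; a sketch, with an illustrative date:

    <meta name="robots" content="noarchive">
    Expires: Sat, 30 Sep 2006 23:59:59 GMT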

This case bears several similarities to the Google Print and "deep linking" cases of years past: companies that publish their articles online but apparently don't know, to quote Jon Stewart, "jack shit about the internet."

Why is it that the New York Times, which does the same thing, moving expired articles into a paid premium part of the site, has no problem dealing with search engines? The web has had an implicit social contract since it began: if you publish content online and don't want it cached or indexed, you use the commonly agreed-upon mechanisms, OR you protect your site by blocking spidering outright (easy enough: require a username/password signup to view articles for free, block the crawler's User-Agent, etc.), as sketched below.
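
Blocking by User-Agent is similarly trivial. A sketch assuming an Apache server with mod_rewrite (well-behaved crawlers identify themselves honestly, so this is enough to keep Googlebot out):

    RewriteEngine On
    # Respond 403 Forbidden to any request identifying itself as Googlebot
    RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
    RewriteRule ^ - [F]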

The Belgian courts are essentially trying to change internet common law, or community etiquette, because a bunch of clueless newspaper companies that arrived on the internet late don't know the community rules. No one *forced* these newspapers to put their content online.

Imagine I started my own online service and made it clear in the FAQ that if you publish on it, you must "opt out" of indexing or caching. Then some people arrive, publish their articles, and sue me because they don't want to opt out; they want opt-in to be the default, after I've already got millions of people using the system successfully.

Imagine dumbasses who publish content on Wikipedia and then sue because someone edited it.

Any article freely reachable from your domain root and not covered by robots.txt or meta tags gets indexed by search engines; that has been the implicit agreement between publishers and search engines for years. There is no law that mandates this, but all legitimate search engines obey these rules. The internet has worked like this for years without needing the heavy-handed and clueless intervention of judges.
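
That "implicit agreement" is so standard that honoring it takes only a few lines of code for anyone writing a crawler. A sketch in Python (the newspaper URL is made up):

    # How a well-behaved crawler checks robots.txt before fetching anything
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("http://www.example-newspaper.be/robots.txt")
    rp.read()  # fetch and parse the publisher's robots.txt

    article = "http://www.example-newspaper.be/archives/article-123.html"
    if rp.can_fetch("Googlebot", article):
        print("publisher allows crawling:", article)
    else:
        print("publisher opted out, skipping:", article)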
 
Glad I'm not living in Belgium...
I know ten times more reasons why I'm glad I don't live in your country.

Seriously man... It's sad to judge a whole country this way. And you clearly have never tasted Belgian beer and chocolate. :p
 
It does sound like this Belgian group is pretty clueless about the internet. Having said that, I think it's reasonable for them to ask Google not to spider their sites for news content if that's what they really want. I think, though, that if Google can show they already have reasonable measures in place for preventing this (such as observing meta tags and robots files), then that should be good enough. It sounds to me like Belgian law has just failed to keep up with a modern medium and some people have found a way of exploiting that. The law tends to be a slow and ponderous beast that often lags behind, and I don't think Belgium is all that different from most countries in this respect.
 
Google not only obeys robots.txt and meta tags, but you can also ask Google in writing not to spider your site and they will comply. A simple email to them usually gets something removed, like an embarrassing USENET post from the past.

The newspapers in Belgium, like most newspapers in every country, are probably losing circulation to internet media and are trying to grab cash from whomever they can.

Not to mention that like the RIAA/MPAA, Copiepresse takes the notion of copyright control way too far.
 
In the Netherlands, we have many jokes about Belgian people being stupid. And although we don't really mean it, things like this make you wonder...

Yes, it's really stupid, and I think most people around here don't understand it one bit. But I expect that many companies will sue the crap out of the companies who did this when they notice the hit they took, and that this ruling won't last long.
 