It looks as if the media industry, represented by the World Association of Newspapers, is finally starting to recognize that to generate revenue on the Internet, it needs to work with search engines, not against them. So it has come up with the brand new idea (how's that for subtle irony?) of using machine-readable metadata to regulate how search engine robots make use of content available online.
The new scheme is called ACAP (Automated Content Access Protocol), and according to a recent press release (not on the web) it will "provide an automated system, allowing search engine crawlers to recognise copyright content owners' access and use permissions' information, instead of merely allowing or forbidding them to search and index the content".
No technical details are known at the time of writing, but presumably ACAP will allow finer-grained regulation of access rights than the existing opt-out scheme, robots.txt (the Robots Exclusion Standard).
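For context, here is what the existing scheme can and cannot express. A robots.txt file gives a crawler only a binary allow-or-deny answer per path; it says nothing about what the crawler may *do* with the content it fetches, which is the gap ACAP apparently wants to fill. A minimal sketch using Python's standard `urllib.robotparser` (the site name and rules are hypothetical):

```python
# Sketch of the Robots Exclusion Standard in action: the answer is
# only ever "may fetch" or "may not fetch" -- no usage permissions.
import urllib.robotparser

# A hypothetical newspaper's robots.txt
ROBOTS_TXT = """\
User-agent: *
Disallow: /archive/
Allow: /
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The archive is off-limits; everything else may be crawled.
print(rp.can_fetch("ExampleBot", "https://news.example.com/archive/story.html"))  # False
print(rp.can_fetch("ExampleBot", "https://news.example.com/front-page.html"))     # True
```

Note that nothing in this vocabulary distinguishes "index this page" from "display a snippet of it" or "cache a full copy", which is precisely the kind of permissions information the press release describes.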