There’s a fun debate happening right this very second in the WebmasterRadio.fm chat, where people are arguing over whether the mere existence of an RSS feed implicitly grants permission to anyone and everyone to republish your work.
Personally, I’m on the fence about the morality of the issue. Scraping is wrong, but I don’t see how you can prevent scrapers from taking your content and running with it. You may as well acknowledge that the practice occurs and use it to your advantage by seeding your feed with some back links to your site(s). You can also limit your feed to snippets of the full articles, or even just headlines, to make your content less desirable to scrapers.
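The back-link and snippet ideas above can be sketched in a few lines. This is a hypothetical illustration, not tied to any particular feed library; the function names and the item structure are made up for the example:

```python
# Sketch: massage a feed item before publishing it, so scraped copies
# still carry a link back to the original and contain only a snippet.

def add_backlink(description: str, title: str, url: str) -> str:
    """Append an attribution link to an item's description."""
    attribution = f'<p>Originally published at <a href="{url}">{title}</a>.</p>'
    return f"{description}\n{attribution}"

def truncate_to_snippet(description: str, max_words: int = 40) -> str:
    """Reduce a full article to a short teaser to make it less scrape-worthy."""
    words = description.split()
    if len(words) <= max_words:
        return description
    return " ".join(words[:max_words]) + "..."

# Hypothetical feed item
item = {
    "title": "My Post",
    "link": "https://example.com/my-post",
    "description": "Full article text goes here, paragraph after paragraph...",
}
item["description"] = add_backlink(
    truncate_to_snippet(item["description"]),
    item["title"],
    item["link"],
)
print(item["description"])
```

Even when a scraper republishes the result verbatim, the snippet is incomplete and the back link points readers (and search engines) at your site.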
To try to DMCA every site you catch reprinting your work is an exercise in futility, and you’d have a really tough time suing anyone over it unless:
- Your content is being reprinted/republished by a legitimate, large, money-making website; they are claiming they wrote it; and you can prove monetary damages.
- You’re a bored attorney and have lots of time to kill.
- You have a relative who is a bored attorney and has lots of time to kill.
- You’ve got deep enough pockets and a deep enough hatred for a single, particular violator and want to go to war on principle, and you don’t care that you won’t actually win any money (because you can’t squeeze blood from a rock).
Really, unless situation #1 is applicable, it’s pointless to pursue.
Ultimately, I think your best bet is to put all the appropriate copyright notices and restrictions on your site and in your TOS, and then take the proactive measure of sprinkling some back links into your content. Limiting the feed to snippets or headlines, rather than providing full articles, might help, too.