
Torrent search results listings directly in uTorrent?


Dan


Hello,

I think it would be a neat feature to offer torrent search result listings directly in uTorrent. For instance, if you enter a search query in the top-right box and press Enter, a window pops up (no browser, simply a child window of uTorrent). In this pop-up window you see a list of all search results from various websites, with the ability to sort by size, seeders, etc. When you click on a result, a browser window pops up, taking you to the website which returned the result and showing the details of that torrent.

An easy integration of this could be achieved by using the back-end at http://www.xmltorrents.com. XMLTorrents offers a dynamic website listing, meaning that one can fetch, in real time, a list of all available websites to search on. Thus, should a feature like this appear in uTorrent, it would update itself (so you would not need to wait on new versions of uTorrent). You can read about more features on the website.

I would say XMLTorrents is fairly easy to integrate, as all it requires is some simple programming and knowledge of how to write an XML parser. I am positive that the developers of uTorrent will have no problems with this should they decide to integrate it.
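For a rough sense of the work involved, here is a minimal sketch in Python of what the client side of such an integration could look like. The endpoint URL, query parameter, and element names are all made up for illustration -- the real XMLTorrents schema may differ:

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    def search(query):
        # Hypothetical search endpoint; the real back-end's URL scheme is unknown.
        url = "http://www.xmltorrents.com/search?q=" + urllib.parse.quote(query)
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        results = []
        for item in root.iter("result"):  # assumed element name
            results.append({
                "name": item.findtext("name", ""),
                "size": int(item.findtext("size", "0") or 0),
                "seeders": int(item.findtext("seeders", "0") or 0),
                # Links point to the site's details page, not the .torrent itself.
                "url": item.findtext("url", ""),
            })
        # Sort by seeders, the way the pop-up window would by default.
        return sorted(results, key=lambda r: r["seeders"], reverse=True)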

In case you are wondering: yes, I am the creator of the website and application. Please do not think of this post as pure advertising; I am just trying to give something back to "the scene" by offering this free service. I would be very happy to see a feature like this in uTorrent, even if it did not use the XMLTorrents back-end.

Regards,

Dan Nilsson


Indeed, something like this has been requested before, but the problem is that each torrent indexer/tracker site is different and returns its data differently. If there were a more standardized output, then I'm sure ludde would've capitalized on it already. Alas, that's not the case, so there is no way to do this for every site.


It'd be best to avoid relying on a single point of failure, but that wouldn't be bad in the meantime. This is actually easier to implement than it seems. You could come up with an XML-based format that describes how a given site lays out its data. The format would essentially be a declarative specification of what some simple code using the JavaScript DOM, Python's Beautiful Soup, or something similar would do. I'm guessing that something like this may already exist... some XSLT thing to transform HTML? Not sure, but rolling your own wouldn't be hard if, after checking for pre-existing stuff, nothing turned up.
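To make the comparison concrete, here is roughly what the per-site logic looks like when written imperatively with Beautiful Soup. The site, class names, and column order are invented for the example; every real site differs, and that variation is exactly what the XML description would have to capture:

    import urllib.parse
    import urllib.request
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def scrape_one_site(query):
        # Hypothetical layout: one <tr class="result"> per hit, with
        # columns in the order name / size / seeders.
        url = "http://torrents.example.com/search?q=" + urllib.parse.quote(query)
        with urllib.request.urlopen(url) as resp:
            soup = BeautifulSoup(resp.read(), "html.parser")
        results = []
        for row in soup.select("tr.result"):
            cells = row.find_all("td")
            results.append({
                "name": cells[0].get_text(strip=True),
                "size": cells[1].get_text(strip=True),
                "seeders": int(cells[2].get_text(strip=True)),
                "url": cells[0].a["href"],
            })
        return results

An XML description format would just be this same knowledge -- the selectors and the column meanings -- written down as data instead of code.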


I think njyoder meant that it's best not to rely on the single point of failure when deciding what gets implemented or not. In this case, the fact that each site delivers results in a different format is the point of failure, and deciding not to implement the feature just because of that single reason would be a mistake.

I guess having one *popular* application support a standardized format would mean other people might start following suit, but in this case, I don't see it happening, as the webmasters of those search index sites make money off of the advertisements there, and if no one visits it directly, no one sees the advertisement, and if no one sees the advertisement, the advertiser won't want to advertise at the site anymore. Meaning no money for the webmaster, and therefore, it's a bad idea for them overall if they plan to keep making money off the site (be it for profit, or for maintaining server costs). AKA it *probably* won't be implemented by a lot of sites anyway. But that's just my guess, who knows...


I was referring to utilizing a single website to deliver the special XML-search-format files. We don't know how reliable this "xmltorrents.com" website will be in the future, nor how it will hold up once polling it becomes a standard feature in many clients. Better would be a feature which allows people to download a simple XML file for each individual torrent tracker/search site, describing how that site lays out its search results -- in effect, a "screen scraping" recipe for that site (http://en.wikipedia.org/wiki/Screen_scraping). This way, none of the code needs to be changed if sites get added, removed, or have their format altered. If a site does change, someone just makes simple alterations to a file and uTorrent users put that in their search directory.

The method of action would be an XML format that describes actions similar to the way JavaScript operates on the DOM: http://www.howtocreate.co.uk/tutorials/javascript/dombasics . There are various libraries that make this easy in different languages. There is actually a relatively new website (named something like "site scraper") that lets an end user manually select the DOM elements of a site to scrape, and produces an XML file for them.
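A toy version of what such a per-site description file could look like, together with a minimal interpreter for it (the format is invented here purely for illustration; a real proposal could differ in every detail):

    import xml.etree.ElementTree as ET
    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    # Invented example of a per-site "screen scraping recipe".
    EXAMPLE_DESCRIPTOR = """
    <site name="example-tracker">
      <search url="http://torrents.example.com/search?q={query}"/>
      <result selector="tr.result">
        <field name="name"    selector="td.name"/>
        <field name="seeders" selector="td.seeds"/>
        <field name="url"     selector="td.name a" attribute="href"/>
      </result>
    </site>
    """

    def apply_descriptor(descriptor_xml, html):
        """Extract search results from a page using only the rules in the file."""
        site = ET.fromstring(descriptor_xml)
        spec = site.find("result")
        soup = BeautifulSoup(html, "html.parser")
        rows = []
        for node in soup.select(spec.get("selector")):
            row = {}
            for field in spec.findall("field"):
                target = node.select_one(field.get("selector"))
                attr = field.get("attribute")
                row[field.get("name")] = target.get(attr) if attr else target.get_text(strip=True)
            rows.append(row)
        return rows

When a site changes its layout, only its file changes; the client itself stays untouched.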

EDIT:

Here's the website I was referring to: http://www.dappit.com/ -- you can try it yourself. It seems to have periodic downtime, but it's good enough to show people not familiar with screen scraping what I'm talking about. If you google "screen scraping" (together with library/API/whatever), you'll get an idea of how prevalent this is -- there really are a million libraries to make it easy. I may even check out the torrent websites later myself to see how easy they are to scrape, and write some software to demonstrate some sort of XML description format for scraping them. Of course, when I say "I'll get to it later", it means "I'm lazy and I'll do it in a month or six."


"I guess having one *popular* application support a standardized format would mean other people might start following suit, but in this case, I don't see it happening, as the webmasters of those search index sites make money off of the advertisements there, and if no one visits it directly, no one sees the advertisement....."

That is true, and I have also thought about that. In my case with XMLTorrents, I have made the URLs point to the details page of the torrent instead of to the .torrent file itself. This forces the users to view at least some advertisements from the origin website.

njyoder, I understand your concern with relying on a single website to deliver these XML-formatted results to all clients, and perhaps you are right; one does not know how reliable this website will be in the future.

My intentions in suggesting XMLTorrents were to make things as simple as possible for the developers of uTorrent. I am certain that they are competent enough to implement a thing like this on their own, possibly using the method you suggested, but perhaps they do not have the time to. By using a service like XMLTorrents, not much is needed from them to make this happen.

Dan


In that case, forcing them to visit the torrent details page is not much less than what already happens: they view the search results (which is already done) and then click on the link (which leads to that same page).

I like the idea and all, and have thought about it as well, but I just don't see something like it happening =T


The main benefit this would have is that you would be able to tell what you have already downloaded in your library, as with LimeWire. Maybe if you gave the websites a standardized way to deliver their advertising to the browser window, you could get them on board with it; they want torrents to be accessible and popular just like all of us. uTorrent could become just like LimeWire, but ad-supported. This path is very dangerous, though... right now uTorrent (and other torrent clients) are just glorified telnet clients. If you add search engine capability, where results are integrated with the program, it becomes a "rich media content delivery system". Sounds much more like a target for lawsuits, doesn't it? The way it is now, at least if a search website goes down, another can go up. And they are motivated to keep putting them up, because it makes them money. Ok, I'm done rambling, lol. What do you guys think?


I don't see anything like this setting anyone up for lawsuits. It'd only be gathering information from the internet the same way browsers do... As for displaying advertisements inside: people are definitely going to start calling µTorrent "adware" or LSJADfklahdhfoa-ware -- there are a lot of haters just waiting for something new to bash it for. I guess that shouldn't be a deciding factor, but even still... =T

(And you probably mean "torrent" clients, not "telnet" :P)


I'm aware of TorrentCascade, and indeed it does what I'm suggesting (I have developed two applications like this myself: XMLTorrents and Dynamic Torrent Searcher). However, my suggestion is to integrate this into uTorrent itself, thus eliminating the need for any external application.

Dan


TorrentCascade looks like it does what I suggested; I'll have to check out what the client downloads to 'understand' each website. When a website changes, the update could be written in less than an hour (assuming someone gets to it that quickly), and I don't see a problem in some very small downtime in a search feature for a specific site. The only problem is if the torrent search sites start doing nasty tricks to specifically avoid this, like using JavaScript to actually write out the unobscured results.
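One nice side effect of shipping a small description file per site (like the toy format sketched above) is that breakage can be detected automatically by running a known-good query through it -- a hedged sketch, reusing that invented format:

    import urllib.request
    import xml.etree.ElementTree as ET
    from bs4 import BeautifulSoup

    def descriptor_still_works(descriptor_xml, probe_query="ubuntu"):
        """Run one very common query and check the result selector still matches."""
        site = ET.fromstring(descriptor_xml)
        url = site.find("search").get("url").format(query=probe_query)
        with urllib.request.urlopen(url) as resp:
            soup = BeautifulSoup(resp.read(), "html.parser")
        # Zero hits for a common query almost certainly means the site changed
        # its layout and the file needs its hour of maintenance.
        return len(soup.select(site.find("result").get("selector"))) > 0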


Hi Dan, nice app you've got there. Sorry, I had only read the thread title, and answered it by pointing you to a torrent search to use until the feature is implemented in µTorrent. :D Just checked out XMLTorrents, and it is very nicely designed and certainly better than TorrentCascade. Thanks.


Now, making all sites use the same way of showing the search results surely can't happen, but...

I recently found this: http://qbittorrent.sourceforge.net/images/stories/screenshots/search.png

They just made a parser for some of the biggest sites. This is not something that's hard to do, and I think it won't hurt to have a similar thing in uT.

The problem is that this is actually delivered in the form of a plugin. This is the best way to do it, because it makes for easier adjustment if one of the sites decides to change something (no need to make a new build of the whole program), but we know what most people (including me) think of adding plugins to uT...
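For what it's worth, the "plugin" mechanism can be as small as a directory scan at startup, so it need not mean executable code at all. A sketch, with the directory name and file layout made up:

    import glob
    import os
    import xml.etree.ElementTree as ET

    def load_search_sites(directory="~/.utorrent/search"):
        """Load every per-site description file found in the search directory."""
        sites = {}
        pattern = os.path.join(os.path.expanduser(directory), "*.xml")
        for path in glob.glob(pattern):
            with open(path, encoding="utf-8") as f:
                descriptor = f.read()
            name = ET.fromstring(descriptor).get("name", os.path.basename(path))
            sites[name] = descriptor  # handed to the scraper at search time
        return sites

Adding or fixing a site is then just dropping in a replacement file -- no new build of the program.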


"All that work," but it's not a lot of work. Even if they use JavaScript, it's doubtful they'd do anything sophisticated, which would be required to really thwart such a system.

EDIT: I should add that, in the worst-case scenario, you could use Gecko/SpiderMonkey. The websites must render output correctly for users, and as such, you can capture that output using those engines. The only way around this that I can think of (aside from requiring special software) -- and it would make things such a pain in the ass for users that I doubt they'd do it -- is to force the user to solve a captcha for each individual search. Honestly, I don't think they'd even go far enough to use anything sophisticated anyway, so this is just me being paranoid and considering every conceivable possibility.


"the problem is that this is actualy delivered in a form of a plugin. This is the best way to do it...."

Actually, the best way to do it would be a gateway like XMLTorrents, where users can temporarily download a file which contains information on how to parse results from a specific website. This way no external files are needed.

As for the JavaScript issue, I doubt sites will use such methods, but if one does (and there is no way to circumvent it), one can always just exclude that website from the search. There will always be other websites to search on (hopefully).


>Actually, the best way to do it would be a gateway like XMLTorrents, where users can temporarily download a file which contains information on how to parse results from a specific website. This way no external files are needed.

Two problems:

1. As I explained, it's a single point of failure--you have to rely on one website.

2. For efficiency, it will need to cache the information from the websites (so it doesn't repeatedly poll the gateway for every search). In other words, you effectively have external files anyway.
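Point 2 in sketch form -- the gateway URL and cache location are invented for the example -- showing how even a gateway-based client ends up keeping local copies:

    import os
    import time
    import urllib.request

    CACHE_DIR = os.path.expanduser("~/.utorrent/search-cache")
    MAX_AGE = 24 * 3600  # re-poll the gateway at most once a day per site

    def get_descriptor(site_name):
        os.makedirs(CACHE_DIR, exist_ok=True)
        path = os.path.join(CACHE_DIR, site_name + ".xml")
        # Serve from the local copy unless it is stale -- these cached copies
        # are precisely the "external files" the gateway was supposed to avoid.
        if os.path.exists(path) and time.time() - os.path.getmtime(path) < MAX_AGE:
            with open(path, encoding="utf-8") as f:
                return f.read()
        url = "http://www.xmltorrents.com/descriptor/" + site_name  # hypothetical
        with urllib.request.urlopen(url) as resp:
            data = resp.read().decode("utf-8")
        with open(path, "w", encoding="utf-8") as f:
            f.write(data)
        return data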

