A botnet, a network of software robots, is typically associated with trojan- or worm-infected computers doing the bidding of their master: spam, malware, DDoS. And while the common application is grossly unethical and damaging, the academic study of a cluster of software nodes working as a group is fascinating.
Given the popularity of the IRC protocol for communication between infected computers, I thought it might be an interesting thought experiment to consider other means of communication. On a locked-down corporate network that blocks all but a few essential ports, HTTP is typically let through; I guess people need to access web pages for research and other business needs. Without setting up any control servers of our own, let's see what existing web services one could use to spawn a web of online mischief!
Twitter might prove to be an ideal service, as it is already meant for posting social communication on the web, not just between humans but between bots as well.
Twitter’s API makes it very easy for software to access and post all the vital information.
The service comes with the concept of following specific accounts built in, allowing one to set up the network layout entirely within the web service itself.
The functionality for replies, direct messages, and private profiles is just gravy.
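To make the idea concrete, here is a purely illustrative sketch of the node side of such a scheme: given a list of status updates (which a real client would fetch through the Twitter API), pick out the ones carrying a command. The `#task` marker and the command format are entirely made up for this example.

```python
# Hypothetical marker identifying command statuses; not a real convention.
COMMAND_TAG = "#task"

def extract_commands(statuses):
    """Return (verb, argument) pairs from statuses that carry the tag."""
    commands = []
    for text in statuses:
        if not text.startswith(COMMAND_TAG):
            continue  # ordinary chatter, ignore
        parts = text[len(COMMAND_TAG):].split(None, 1)
        if parts:
            verb = parts[0]
            arg = parts[1] if len(parts) > 1 else ""
            commands.append((verb, arg))
    return commands

statuses = [
    "Just had lunch, great sandwich.",
    "#task ping example.org",
    "#task sleep 300",
]
print(extract_commands(statuses))  # → [('ping', 'example.org'), ('sleep', '300')]
```

The same parsing would work whether the statuses arrive from a public timeline, replies, or direct messages; only the fetching step changes.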
Like every social-network website, Reddit comes with a set of “friends”, and even private custom reddits that can be used as “channels” to communicate in.
Simple and clean HTML markup makes it easy to parse the contents of the page.
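As a sketch of that parsing step, the standard-library `html.parser` is enough to pull link titles out of simple markup. The markup below is invented for illustration; it is not Reddit's actual HTML.

```python
from html.parser import HTMLParser

class LinkTitleParser(HTMLParser):
    """Collect the text of <a class="title"> elements from a listing page."""
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "a" and ("class", "title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data)

page = ('<div><a class="title" href="/r/foo/1">First post</a>'
        '<a class="title" href="/r/foo/2">Second post</a></div>')
parser = LinkTitleParser()
parser.feed(page)
print(parser.titles)  # → ['First post', 'Second post']
```

In practice one would fetch the page over plain HTTP first, which is the whole point of the firewall argument above.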
The zealous community, though, is quick to point out any suspiciously spammy activity, and Proggit members will likely hijack any control structure put in place.
The above ideas could be generalized to any Social Network website. GigPark’s feed of trusted recommendations is already filtered to 2 degrees of separation between the linked nodes, making filtering and discovery of new bots much easier. Friend-lists, messages, and favourite “recommendations” (which could function as a queue of tasks) rival Twitter’s toolset.
GigPark’s innovative Suggested Friends feature will even sync the “friends” from other social networks such as Twitter or Facebook, allowing for redundancy across multiple networks.
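The two-degrees-of-separation filter mentioned above amounts to a depth-limited traversal of the friend graph. A minimal sketch, using a hypothetical adjacency map in place of any real service's data:

```python
from collections import deque

def within_two_degrees(graph, start):
    """Return every node reachable from `start` in at most two hops."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == 2:
            continue  # don't expand past the second hop
        for friend in graph.get(node, []):
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, depth + 1))
    seen.discard(start)
    return seen

graph = {
    "alice": ["bob", "carol"],
    "bob": ["dave"],
    "dave": ["erin"],  # three hops from alice: excluded
}
print(sorted(within_two_degrees(graph, "alice")))  # → ['bob', 'carol', 'dave']
```

A node discovering new peers would only need to walk its own friends-of-friends, which is exactly the slice of the graph the service already exposes.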
Some downsides to this, obviously hypothetical, method stem from placing too much reliance on the host network. The web service is in an advantageous position to identify all the nodes; perhaps more so than an IRCop discovering the IRC channel where bots have gathered to communicate.
Another is the issue of information persistence. Web applications typically keep the entire history of commands online. While the privacy options some social networks supply might hide some (or even all) of the activity from the public, extra work is needed to hide the information from the host itself. Obfuscation, encoding, and the liberal use of “delete” options will scatter the data through the access logs, making the activity reasonably more difficult to trace than simply taking a snapshot of the database. Some bots don’t enjoy being studied by security researchers, and here they would be more exposed.
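The encoding point is trivial to illustrate. A toy sketch, in which a command is dressed up as an opaque token before posting and recovered on the other end; note that Base64 is not encryption, so this only defeats naive content matching and casual inspection, not the host itself:

```python
import base64

def encode_command(command: str) -> str:
    """Turn a command string into an innocuous-looking Base64 token."""
    return base64.urlsafe_b64encode(command.encode()).decode()

def decode_command(token: str) -> str:
    """Recover the original command from its token."""
    return base64.urlsafe_b64decode(token.encode()).decode()

token = encode_command("sleep 300")
print(token)
print(decode_command(token))  # → sleep 300
```

Anything stronger would require actual cryptography, which in turn makes the traffic look less like ordinary social chatter.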
Finally, going back to the issue of security and the corporate firewall: there will likely be a proxy server filtering access to certain websites. Some might be blocked because they distract employees (Facebook, MySpace, etc.), others because they match on some content filter. But with the simple goal of communication, one just needs to find an online service trusted enough to be widely accessible, and some means of getting it to display your supplied information.