Tag Archives: internet

‘Largest cyber attack ever’ is happening right now, threatens rest of web:

A cyber attack described as the largest in history is currently underway, and it’s apparently all because of an argument over some spam.

The Spamhaus Project, based in both London and Geneva, produces lists of email addresses and servers that are known to send out things most people won't want, from penis enlargement scams to malware and viruses. Its decisions are incredibly influential, and it seems someone isn't too happy about being blocked: a vast cyber attack is now being directed at Spamhaus, threatening the internet's core infrastructure.

The distributed denial of service (DDoS) attacks are so large that they're reportedly peaking at 300Gb/s (that's three hundred gigabits per second) of data. For comparison, that's roughly a sixth of the practical functioning capacity of one of the major transatlantic cables, TAT-14. Most observers judge this to be the largest DDoS attack in the history of the internet. Spamhaus's Vincent Hanna confirmed that this was the largest such attack aimed at Spamhaus so far, and confirmed that it could "certainly" affect internet traffic elsewhere.

He said: “Core internet infrastructure may get overwhelmed by the amount of traffic involved in an attack. When this happens other traffic may get impacted too. Compare it to a big highway: If the traffic jam gets big enough the onramps will slow down and fill up, and the roads to the onramps will fill up too.”

According to a blog post on the site of web security company CloudFlare (we were directed to it by Hanna), the first attack happened on 18 March. It said: "The attack was large enough that the Spamhaus team wasn't sure of its size when they contacted us. It was sufficiently large to fully saturate their connection to the rest of the internet and knock their site offline. These very large attacks, which are known as Layer 3 attacks, are difficult to stop with any on-premise solution. Put simply: if you have a router with a 10Gbps port, and someone sends you 11Gbps of traffic, it doesn't matter what intelligent software you have to stop the attack because your network link is completely saturated."

The attacks have been continuing since then, growing larger and larger in size. For most people, there’s one main suspect. Last month, Spamhaus added the servers of Cyberbunker to its spam lists. Cyberbunker is a server company based in a decommissioned Nato bunker in the Dutch town of Kloetinge. Outside of the bunker live dozens of rabbits; inside are servers which host everything “except child porn and anything related to terrorism”, according to its website.

The sheer quantity of spam emanating from Cyberbunker's servers (appearing under the address "cb3rob.net") led Spamhaus to block all of its traffic, a decision which infuriated many people. Cyberbunker has also been linked with criminal gangs from Russia and other Eastern European nations, which contributed to Spamhaus's decision to block its traffic.

This isn't the first large attack on Spamhaus (as you might expect, an organisation dedicated to stopping spam and scammers isn't going to be popular with some shady people), but it is remarkable in its scale.

Hanna said: “Some people online claim that we are not accountable and can just ‘censor’ anything we want. This is obviously not the case. Not only do we have to operate within the boundaries of the law, we are also accountable to our users. If we started advising our users not to accept mail from certain places where they actually do want email from, they would be very quick to stop using our data because it’s obviously not working right for them.”

The attacks coincide with the launch of a new initiative by the British government to help businesses and law enforcement agencies better share information on cyber attacks, which has been rather optimistically likened to a “secure Facebook”. Cyber crimes units are currently looking into the Spamhaus attacks.

Original article – http://www.wired.co.uk/news/archive/2013-03/27/biggest-cyber-attack-spamhaus

Windows XP nearing its end?

Windows XP is ever nearing the end of its life. With new advances in technology, Windows 7 is becoming the favoured operating system.
The countdown currently sits at 711 days (as I write this), which is just under two years until Microsoft stops providing support for an operating system that has so far had a shelf life longer than expected since its inception in 2001.
Support is already partly at an end, with support for SP2 having ceased back in July 2010.
I have to admit, I'm not a big fan of XP's look, and I also believe that Windows XP is little more than an advanced version of Windows 2000, on which it is based, and which is itself in many ways a progression of Windows NT4.
For me, Windows 2000 has always been the most stable and down-to-earth OS that Microsoft ever created, and as XP was built on the Windows 2000 model, it stands to reason that this is why Microsoft supported it for three years or so longer than perhaps originally planned, and why XP will have had a shelf life of more than a decade before it is retired.
I do think, though, that there may have to be a low-spec version of Windows 7, or a bespoke OS for lower-spec machines, simply because there are a lot of older systems still in use. A technology boom came about from the launch of netbooks, which came off the back of the £100 laptop scheme piloted to make IT affordable to everyone. The trouble is that a lot of netbooks are not capable of running newer operating systems such as Vista or Windows 7, simply because the hardware was much lower spec, with modest memory and less powerful CPUs to keep costs down, so many netbooks use single-core processors. Operating systems such as Ubuntu or the eagerly awaited Google OS could otherwise be in contention in the £100 IT market.

History of the Search Engine

Although we credit Google, Yahoo, and other major search engines for giving us the system we use to find the information we seek, the concept of hypertext came to life in 1945, when Vannevar Bush urged scientists to work together to help build a body of knowledge for all mankind. He then proposed the idea of a virtually limitless, fast, reliable, extensible, associative memory storage and retrieval system. He named this device the memex.

But there is a long list of great minds who have given us the information systems we use today. This article illustrates some of them. Here is the history of the search engine:

Ted Nelson
Ted Nelson created Project Xanadu in 1960 and coined the term hypertext in 1963. His goal with Project Xanadu was to create a computer network with a simple user interface that solved many social problems, like attribution. While Project Xanadu, for reasons unknown, never really took off, much of the inspiration to create the World Wide Web came from Nelson's work.

Gerard Salton
Gerard Salton was the father of modern search technology. He died in August 1995. His teams at Harvard and Cornell developed Salton's Magic Automatic Retriever of Text, otherwise known as the SMART information retrieval system. It included important concepts like the vector space model, Inverse Document Frequency (IDF), Term Frequency (TF), term discrimination values, and relevancy feedback mechanisms. His book A Theory of Indexing explains many of his tests, and search today is still built on many of his theories.
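To make the TF and IDF concepts concrete, here is a minimal sketch in Python; the toy corpus and the exact weighting formula are illustrative assumptions, not SMART's actual implementation:

    import math

    # Toy corpus: each document is just a list of terms (an illustrative
    # simplification, not how SMART actually stored documents).
    corpus = [
        ["information", "retrieval", "system"],
        ["search", "engine", "retrieval"],
        ["vector", "space", "model"],
    ]

    def tf(term, doc):
        # Term Frequency: occurrences of the term, normalised by document length.
        return doc.count(term) / len(doc)

    def idf(term, docs):
        # Inverse Document Frequency: terms appearing in fewer documents
        # carry more weight.
        containing = sum(1 for d in docs if term in d)
        return math.log(len(docs) / containing) if containing else 0.0

    def tf_idf(term, doc, docs):
        return tf(term, doc) * idf(term, docs)

    print(tf_idf("retrieval", corpus[0], corpus))  # about 0.135

A term that appears in every document gets an IDF of zero, so common words contribute nothing to relevance; that intuition survives, in more refined forms, in modern search engines.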

Alan Emtage
In 1990, a student at McGill University in Montreal by the name of Alan Emtage created Archie, the first search engine. It was invented to index FTP archives, allowing people to quickly access specific files. Archie users could reach its services through a variety of methods, including e-mail queries, telnetting directly to a server, and eventually through World Wide Web interfaces. Archie only indexed computer files, but with it Alan Emtage helped to solve the data scatter problem. Originally it was to be named "archives", but this was shortened to Archie.

Paul Lindner and Mark P. McCahill
Archie gained such popularity that in 1991 Paul Lindner and Mark P. McCahill created a text-based information browsing system that uses a menu-driven interface to pull information from across the globe to the user's computer. Named after the Golden Gophers, the mascot of the University of Minnesota, the name is fitting: Gopher tunnels through other Gophers located in computers around the world, arranging data in a hierarchical series of menus that users can search for specific topics.

Tim Berners-Lee
Up until 1991, there was no World Wide Web; the main method of sharing information was via FTP. Tim Berners-Lee wanted to join hypertext with the internet. He used ideas similar to those underlying ENQUIRE, an earlier prototype, to create the World Wide Web (with help from Robert Cailliau), for which he designed and built the first web browser and editor, called WorldWideWeb and developed on NeXTSTEP. He then created the first Web server, called httpd, short for HyperText Transfer Protocol daemon.

The first Web site was built at http://info.cern.ch/ and was first put online on August 6, 1991. Tim Berners-Lee created the World Wide Web Consortium in 1994. He also created the WWW Virtual Library, the oldest catalogue of the web. The history of the search engine is a fascinating story.

Website Creation Simplified

Brief Overview Of The Nuts And Bolts

Before you start fiddling around with HTML editors, FTP clients, and domain registrations, it's important to have at least a basic understanding of how it all works. This Website creation overview gives you an easy-to-understand look at what the process of building your own Website really involves.

A website is a collection of files that work together to form a unified whole. These various files, from images to HTML documents and PHP scripts, are read by a Web browser and displayed appropriately on a computer monitor.

Website creation is essentially the process of arranging information in a way that can be translated by Web browsing software, such as Internet Explorer, and presented to human viewers. To do this correctly, you'll need to gain a basic understanding of coding languages like HTML, CSS, and possibly PHP.

The process of coding your site is literally the activity of entering numerous individual lines of code that tell the Web browser how to format and display your Web page. While it seems complex at first, the truth is that learning the most basic Web development code, HTML, is less complicated than learning to use the English alphabet.

Once you learn what the various command codes actually do, your next step is to practice organizing them in a structured manner within an HTML document. This is not unlike the process of creating a word processing document and saving it; the only difference is that instead of sentences and paragraphs you’ll be entering HTML tags and attributes.
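To make that concrete, here is a minimal sketch of building and saving a bare-bones HTML page from a short Python script; the file name and page content are placeholders, not a prescribed layout:

    # A minimal sketch: assemble a bare-bones HTML page and save it to disk,
    # much like writing and saving a word processing document.
    page = """<!DOCTYPE html>
    <html>
      <head>
        <title>My First Page</title>
      </head>
      <body>
        <h1>Hello, Web!</h1>
        <p>This text sits inside a paragraph tag.</p>
      </body>
    </html>
    """

    # Save the document; opening index.html in a browser will render it.
    with open("index.html", "w") as f:
        f.write(page)

The tags (html, head, body, h1, p) tell the browser what each piece of content is, and the browser handles how it looks on screen.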

Once your files are complete, they’ll need to be added to your Web host so other Internet users can access them. The Web host, or server, is a powerful computer that operates around the clock.

It is here that all the files and data that make up your Website will be stored. You'll also need to register a domain name and point it at your host machine, so people can type an easily remembered Web address into their browser and reach your Website through a connection to your host server.
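As a small illustration of what that domain-to-server link does, the sketch below resolves a name to the IP address of the machine behind it; example.com is just a placeholder domain:

    import socket

    # Once a domain's DNS records point at your host, resolving the name
    # yields the IP address of the server where your files live.
    ip_address = socket.gethostbyname("example.com")
    print(ip_address)

This lookup is exactly what a browser does behind the scenes before it connects to your host and requests your pages.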

In addition to learning how to create and save basic HTML documents, Website creation requires some level of proficiency in transferring files between your computer and a Web server. This is done using the File Transfer Protocol, or FTP for short.

To do this, you’ll need a software tool called an FTP client. This utility is installed on your desktop and can instantly form a connection with your Web host, allowing you to upload files to the Web or download them to your machine.
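Most people use a graphical FTP client for this, but the upload itself is simple enough to sketch in a few lines of Python; the hostname, credentials, and file name below are placeholders, not real account details:

    from ftplib import FTP

    # A minimal sketch of uploading one page to a Web host over FTP.
    ftp = FTP("ftp.example.com")                   # connect to the host (placeholder)
    ftp.login(user="username", passwd="password")  # placeholder credentials
    with open("index.html", "rb") as f:
        ftp.storbinary("STOR index.html", f)       # upload the file
    ftp.quit()

A desktop FTP client performs the same connect, log in, and transfer steps; it just wraps them in a point-and-click interface.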

It is also recommended that you become familiar with the directory structure and hosting control panel your Web host provides. This will make it easier for you to manage your Website.

This sounds like a lot of work, but the truth is the average Internet user can become basically versed in all of this within 30 days or less with a little effort.

It’s beyond the scope of this article to delve into the specifics of any particular technique. But I hope at this point you at least have an understanding of what is involved in learning the Website Creation process.
Article by Timothy Aaron Whiston.