Amol Bhure was born in Maharashtra on July 7, 1990. He is currently pursuing his B.E. in Bangalore. A cyber security professional, hacker, designer, and programmer, he has a keen interest in hacking and network security and has developed several techniques for defending and defacing websites. He is of the opinion that people should learn this art to prevent cyber attacks. Amol currently works as a member of the Bangalore chapter of 'Null International' as a network security guy. Apart from this, he has done internships at Yahoo! India, Amazon India, and others. He has also attended various international conferences, including NullCon Goa, c0c0n, ClubHack, Defcon, SecurityByte, ICFoCS, and OWASP. He holds RHCE, LPT, CEH v7, SCJP, and AFCEH certifications. In programming, he knows C, C++, C#, Java (SCJP), .NET, and PHP. Additionally, he knows hardware description languages such as VHDL and Verilog, as well as embedded microcontroller programming. He has been featured in Google's hall of fame, named one of "India's top 10 hackers" by Google, and listed among the "World's top 50 hacking blogs" by Google.


Saturday, December 11, 2010


What you need:

  1. A Linux box
  2. Root on this Linux box (or a sympathetic admin)
  3. The ability to reboot this box several times a day
  4. A compilation environment installed
  5. Some way to get the kernel source

In more detail:

  1. Your Linux box needs to have at least 300 MB of free disk space. Find out how much disk space you have free with the command df -h. If you need tips for clearing out some disk space, ask. Note that you can compile on one Linux box and run the kernel you compile on another Linux box. You should compile on the fastest Linux box possible. A 500 MHz Pentium III with 256 MB RAM will take around 40 minutes to compile a Linux kernel, a 1200 MHz Pentium III with 640 MB RAM will take around 10 minutes, whereas an 800 MHz Duron with 256 MB RAM will take around 5 minutes.
    I've personally worked with x86's, Alpha's, and PowerPC's, but if you have something else I'm willing to help you figure it out. When asking questions, be sure to let people know if you are compiling for something other than an x86.
  2. You don't _need_ root, but you at least need someone with root to set you up. What you need is the ability to boot your new kernel, which may include the ability to run LILO, to copy kernels to places outside your home directory, and the ability to reboot the machine. I do 99% of my kernel work as a non-root user. I compile in a directory owned by user val, and I made the directories I need to write to owned by user val. You can also give the user reboot a password, so you can reboot the machine without su-ing to root. Talk to your sysadmin to find out what you need to do.
  3. Ideally, your Linux box is something only you need. If other people need it during the day but don't at night, you can probably still use it. If you are concerned about making the box unusable, there are many ways to test a new kernel and then reboot back into your old kernel. Disk corruption is not as common as you think, but if you do not want to risk the data on your hard disk, you can use a ramdisk with your new kernel. (Ramdisks are explained later on.) You can also create a new partition on your hard disk and use that as your root filesystem when you are experimenting with new kernels.
  4. Hopefully, most people installed their Linux box with gcc and other compilation tools included. You can test to see if you have a compilation environment installed by copying the following into a file named hello.c:
    #include <stdio.h>

    int main(void)
    {
        printf("Hello, world!\n");
        return 0;
    }
    And then compiling and running it with:
    $ gcc hello.c -o hello
    $ ./hello
    If this doesn't work, you will need to install a compilation environment. Usually the easiest way to do this is to just re-install your machine.
  5. There are many different ways to get the kernel source, but my preferred one involves having a reasonably high bandwidth net connection. Otherwise, you can get the kernel source from your distribution CD's.
    Remember, if you have any questions, please ask them on the grrls-only list, instead of emailing someone privately. The list members will answer your questions politely and helpfully, and other list members will learn from the answers to your questions. Our goal is to never answer a question with "FAQ, read #14 on XYZ," or "Do a web search," or "RTFM." Communicating with other kernel developers is just as important as finding the answer to your question.
    Get the Source
    Now, you need to get the Linux source. There are many different ways of getting the Linux source, but I'll list them in order of preference. You only need to choose one of these options. While it's easiest to just install the source on your distribution CD, you'll need to stay up to date with the latest version of the Linux source if you want to participate in kernel development.
    Linus recommends not having a /usr/src/linux symbolic link to your kernel source tree. However, the /usr/src/linux link will sometimes be used as a default by some modules that compile outside the kernel. If you need it, make sure it points to the correct kernel tree. If it doesn't, modules that compile outside the kernel may use the wrong header files.
  6. Use BitKeeper to clone a Linux source repository. This is what I use, and it's what many other kernel hackers use, including the entire PowerPC team, Ted T'so, Rik van Riel, and Linus himself. Be warned, this requires downloading many megabytes of data, so don't use it if you have a slow link. This option also requires at least 1 GB of free disk space, which is significantly more than the other options.
  7. First, get BitKeeper: http://www.bitmover.com/cgi-bin/download.cgi Follow the instructions there to download and install BitKeeper. Then, clone the main Linux tree using BitKeeper:
    $ cd /usr/src (or wherever you would like to work)
    $ bk clone bk://linux.bkbits.net/linux-2.6 linux-2.6
    $ cd linux-2.6
    $ bk -r co
  8. And now you have a kernel source tree! Learn more about how to use BitKeeper here:
    1. FTP the kernel source from ftp.kernel.org. This is another bandwidth intensive operation, so don't use it if you have a slow link.
  9. FTP to your local kernel mirror, which is named:
    ftp.<country code>.kernel.org
    For example, I live in the US, so I do:
    $ ftp ftp.us.kernel.org
    Login (if necessary) with username ftp or anonymous. Change directory to pub/linux/kernel/v2.6 and download the latest kernel. For example, if the latest version is 2.6.9, download the file named linux-2.6.9.tar.gz.
    Usually there is a file named LATEST-IS-<version> to tell you what the latest version is. I recommend the gzipped format (filename ending in .gz) instead of the bzip2 format (filename ending in .bz2), since bzip2 takes a long time to decompress.
    Untar and uncompress the file in the directory where you are planning to work.
    You now have your Linux kernel source.
    1. Install the kernel source from your distribution CD. If you already have a directory named /usr/src/linux and it contains more than one directory, you already have the source installed. If you don't, read on.
  10. Mount your installation CDROM. On a RedHat based system, the source RPM is usually in <DistributionName>/RPMS/ and is named kernel-source-<version>.<arch>.rpm.
    One way to find the kernel source package is to run this command, assuming your CD is mounted at /mnt/cdrom:
    $ find /mnt/cdrom -name \*kernel-\*
    Install the RPM using:
    $ rpm -iv <pathname>/kernel-source-<version>.<arch>.rpm
    The -v switch makes rpm report what it is doing, so you can see whether the install succeeds or fails.
    If your system is not RPM-based, please let us know how to install the kernel source from your distribution CD.
    • There are various other ways to get the kernel source, involving CVS or rsync. If you would like to write instructions for one of these other methods of getting the kernel source, go ahead and we'll include them here.
    Why do I recommend BitKeeper over ftp, and ftp over vendor source? BitKeeper handles creating patches for you. (A patch contains the differences between one source tree and another source tree.) BitKeeper also applies the latest changes for you. When I want to update my tree, I just type:
    $ bk pull
    And most of the time, it just works. The only time I have to do work is if I have written code that conflicts with the new code downloaded from the parent tree. When I want to make a patch to send to somebody, I just type:
    $ bk citool
    $ bk -hr<latest revision number> | bk gnupatch > newpatch
    When I want to undo my latest changes, I type:
    $ bk undo -r<latest revision number>
    BitKeeper has all kinds of pretty GUI tools to make debugging and merging code easier. I like BitKeeper so much I wrote a paper about it. You can find the paper and some slides on my web page: http://www.nmt.edu/~val

    I prefer the vanilla kernel source from ftp.kernel.org over the vendor source because you never know what changes the vendor has made to the source. Most of the time, the vendor has improved the tree, but often they make changes of dubious value to a kernel hacker. For example, RedHat 6.2 shipped a kernel that compiled differently depending on whether you were running an SMP kernel at the time of compilation. This makes sense if you are just recompiling a kernel for that machine, but it was useless if you were trying to compile a kernel for a different machine.

    The other reason to use vanilla source instead of vendor source is if you want to create and send patches to other kernel developers. If you create a patch and send it to the Linux kernel mailing list for inclusion in the main kernel tree, it had better be against the latest vanilla source tree. If you create a patch on a vendor source tree, it's unlikely to apply to a vanilla source tree.

    The one place where vendor source is crucial is for non-x86 architectures. The vanilla source almost never builds and boots on an architecture other than x86. Often, the best place to get a working kernel for a non-x86 machine is the vendor source. Each architecture usually has some quirky way of getting the latest source in addition to the vendor-supplied source.
  11. Configure Your Kernel
    The Linux kernel comes with several configuration tools. Each one is run by typing make <something>config in the top-level kernel source directory. (All make commands need to be run from the top-level kernel directory.)
    1. make config

       This is the barebones configuration tool. It will ask each and every configuration question in order. The Linux kernel has a LOT of configuration questions. Don't use it unless you are a masochist.
    Each of the configuration programs produces these end products:
    1. A file named .config in the top-level directory containing all your configuration choices.
    If you have a working .config file for your machine, just copy it into your kernel source directory and run make oldconfig. You should double check the configuration with make menuconfig. If you don't already have a .config file, you can create one by visiting each submenu and turning on or off the options you need. menuconfig and xconfig have a "Help" option that shows you the Configure.help entry for that option, which may help you decide whether or not it should be turned on. RedHat and other distributions often include sample .config files with their distribution specific kernel source, in a subdirectory named configs in the top-level source directory. If you are compiling for PowerPC, you have a variety of default configs to choose from in arch/ppc/configs; make defconfig will copy the default ppc config file to .config. If you know another source of default configuration files, let us know.
    Tips for configuring a kernel:
    1. Always turn on "Prompt for development... drivers".
    2. From kernel 2.6.8, you can add your own string (such as your initials) at "Local version - append to kernel release" to personalize your kernel version string (for older kernel versions, you have to edit the EXTRAVERSION line in the Makefile).
    3. Always turn off module versioning, but always turn on kernel module loader, kernel modules and module unloading.
    4. In the kernel hacking section, turn on all options except for "Use 4Kb for kernel stacks instead of 8Kb". If you have a slow machine, don't turn on "Debug memory allocations" either.
    5. Turn off features you don't need.
    6. Only use modules if you have a good reason to use a module. For example, if you are working on a driver, you'll want to load a new version of it without rebooting.
    7. Be extra sure you selected the right processor type. Find out what processor you have now with cat /proc/cpuinfo.
    8. Find out what PCI devices you have installed with lspci -v.
    9. Find out what your current kernel has compiled in with dmesg | less. Most device drivers print out messages when they detect a device.
    If all else fails, copy a .config file from someone else.
  12. Compile Your Kernel
    At this point, you have your kernel source, and you've run one of the configuration programs and (very important) saved your new configuration file. Don't laugh - I've done this many times.
    First, build the kernel:
    On an x86:
    # make -j<number of jobs> bzImage
    On a PPC:
    # make -j<number of jobs> zImage
    Where <number of jobs> is two times the number of cpus in your system. So if you have a single cpu Pentium III, you'd do:
    # make -j2 bzImage
    What the -j argument tells the make program is to run that many jobs (commands in the Makefile) at once. make knows which jobs can be run at the same time and which jobs need to wait for other jobs to finish before they can run. Kernel compilation jobs spend enough time waiting for I/O (for example, reading the source file from disk) that running two of the jobs per processor results in the shortest compilation time. NOTE: If you get a compile error, run make with only one job so that it's easy to see where the error occurred. Otherwise, the error message will be hidden in the output from the other make jobs.
    If you have enabled modules, you'll need to compile them with the command:
    # make modules
    If you are planning on loading these modules on the same machine they are compiled on, then run the automatic module install command. But FIRST - save your old modules!
    # mv /lib/modules/`uname -r` /lib/modules/`uname -r`.bak
    # make modules_install
    This will put all the modules in subdirectories of the /lib/modules/<version> directory. You can find out what <version> is by looking in include/linux/version.h.

    Recompiling the kernel

    So, now you've compiled the kernel once. Now you want to change part of the kernel and recompile it. What do you need to do?
    In most cases, simply running make -j2 bzImage (or whatever your kernel compile command is) again will do the trick. If you've altered a module's source file, then just do make modules and make modules_install (if appropriate).
    Sometimes, you'll change things so much that make can't figure out how to recompile the files correctly. make clean will remove all the object and kernel object files (ending in .o and .ko) and a few other things. make mrproper will do everything make clean does, plus remove your config file, the dependency files, and everything else that make config creates. Be sure to save your config file in another file before running make mrproper. Afterwards, copy the config file back to .config and start over, beginning at make menuconfig. A make mrproper will often fix strange kernel crashes that make no sense and strange compilation errors that make no sense.
    Here are some tips for recompilation when you are working on just one or two files. Say you are changing the file drivers/char/foo.c and you are having trouble getting it to compile at all. You can just type (from the drivers/char/ directory) make -C path/to/kernel/src SUBDIRS=$PWD modules and make will immediately attempt to compile the modules in that directory, instead of descending through all the subdirectories in order and looking for files that need recompilation. If drivers/char/foo.c is a module, you can then insmod the drivers/char/foo.ko file when it actually does compile, without going through the full-blown make modules command. If drivers/char/foo.c is not a module, you'll then have to run make -j2 bzImage to link it with all the other parts of the kernel.
    The && construct in bash is very useful for kernel compilation. The bash command line:
    # thing1 && thing2
    Says, "Do thing1, and if it doesn't return an error, do thing2." This is useful for doing something only if a compile command succeeds.
    When I'm working on a module, I often use the following command:
    # rmmod foo.ko && make -C path/to/kernel/src SUBDIRS=$PWD modules \
         && insmod foo.ko
    This says, "Remove the module foo.ko, and if that succeeds, compile the modules in drivers/char/, and if that succeeds, load the kernel module object file." That way, if I get a compile error, it doesn't load the old module file and it's obvious that I need to fix my code.
    Don't let yourself get too frustrated! If you just can't figure out what's going on, try the make mrproper command. If you still can't figure it out, post your question on grrls-only. It's good to try to figure things out by yourself, but it's not good if it makes you give up trying at all.
  13. Understanding System Calls
    By now, you're probably looking around at device driver code and wondering, "How does the function foo_read() get called?" Or perhaps you're wondering, "When I type cat /proc/cpuinfo, how does the cpuinfo() function get called?"
    Once the kernel has finished booting, the control flow changes from a comparatively straightforward "Which function is called next?" to being dependent on system calls, exceptions, and interrupts. Today, we'll talk about system calls.

    What is a system call?

    In the most literal sense, a system call (also called a "syscall") is an instruction, similar to the "add" instruction or the "jump" instruction. At a higher level, a system call is the way a user level program asks the operating system to do something for it. If you're writing a program, and you need to read from a file, you use a system call to ask the operating system to read the file for you.

    System calls in detail

    Here's how a system call works. First, the user program sets up the arguments for the system call. One of the arguments is the system call number (more on that later). Note that all this is done automatically by library functions unless you are writing in assembly. After the arguments are all set up, the program executes the "system call" instruction. This instruction causes an exception: an event that causes the processor to jump to a new address and start executing the code there.
    The instructions at the new address save your user program's state, figure out what system call you want, call the function in the kernel that implements that system call, restore your user program's state, and return control back to the user program. A system call is one way that the functions defined in a device driver end up being called.
    That was the whirlwind tour of how a system call works. Next, we'll go into minute detail for those who are curious about exactly how the kernel does all this. Don't worry if you don't quite understand all of the details - just remember that this is one way that a function in the kernel can end up being called, and that no magic is involved. You can trace the control flow all the way through the kernel - with difficulty sometimes, but you can do it.

    A system call example

    This is a good place to start showing code to go along with the theory. We'll follow the progress of a read() system call, starting from the moment the system call instruction is executed. The PowerPC architecture will be used as an example for the architecture specific part of the code. On the PowerPC, when you execute a system call, the processor jumps to the address 0xc00. The code at that location is defined in the file:
    It looks something like this:
    /* System call */
            . = 0xc00
            EXC_XFER_EE_LITE(0xc00, DoSyscall)
    /* Single step - not used on 601 */
            EXCEPTION(0xd00, SingleStep, SingleStepException, EXC_XFER_STD)
            EXCEPTION(0xe00, Trap_0e, UnknownException, EXC_XFER_EE)
    What this code does is save some state and call another function called DoSyscall. Here's a more detailed explanation (feel free to skip this part):
    EXCEPTION_PROLOG is a macro that handles the switch from user to kernel space, which requires things like saving the register state of the user process. EXC_XFER_EE_LITE is called with the address of this routine, and the address of the function DoSyscall. Eventually, some state will be saved and DoSyscall will be called. The next two lines save two exception vectors on the addresses 0xd00 and 0xe00.
    EXC_XFER_EE_LITE looks like this:
    #define EXC_XFER_EE_LITE(n, hdlr)       \
            EXC_XFER_TEMPLATE(n, hdlr, n+1, COPY_EE, transfer_to_handler, \
                              ret_from_except)
    EXC_XFER_TEMPLATE is another macro, and the code looks like this:
    #define EXC_XFER_TEMPLATE(n, hdlr, trap, copyee, tfer, ret)     \
            li      r10,trap;                                       \
            stw     r10,TRAP(r11);                                  \
            li      r10,MSR_KERNEL;                                 \
            copyee(r10, r9);                                        \
            bl      tfer;                                           \
    i##n:                                                           \
            .long   hdlr;                                           \
            .long   ret
    li stands for "load immediate", which means that a constant value known at compile time is stored in a register. First, trap is loaded into the register r10. On the next line, that value is stored on the address given by TRAP(r11). TRAP(r11) and the next two lines do some hardware specific bit manipulation. After that we call the tfer function (i.e. the transfer_to_handler function), which does yet more housekeeping, and then transfers control to hdlr (i.e. DoSyscall). Note that transfer_to_handler loads the address of the handler from the link register, which is why you see .long DoSyscall instead of bl DoSyscall.
    Now, let's look at DoSyscall. It's in the file:
    Eventually, this function loads up the address of the syscall table and indexes into it using the system call number. The syscall table is what the OS uses to translate from a system call number to a particular system call. The system call table is named sys_call_table and defined in:
    The syscall table contains the addresses of the functions that implement each system call. For example, the read() system call function is named sys_read. The read() system call number is 3, so the address of sys_read() is in the 4th entry of the system call table (since we start numbering the system calls with 0). We read the data from the address sys_call_table + (3 * word_size) and we get the address of sys_read().
    After DoSyscall has looked up the correct system call address, it transfers control to that system call. Let's look at where sys_read() is defined, in the file:
    This function finds the file struct associated with the fd number you passed to the read() function. That structure contains a pointer to the function that should be used to read data from that particular kind of file. After doing some checks, it calls that file-specific read function in order to actually read the data from the file, and then returns. This file-specific function is defined somewhere else - the socket code, filesystem code, or device driver code, for example. This is one of the points at which a specific kernel subsystem finally interfaces with the rest of the kernel. After our read function finishes, we return from sys_read(), back to DoSyscall(), which switches control to ret_from_except, which is defined in:
    This checks for tasks that might need to be done before switching back to user mode. If nothing else needs to be done, we fall through to the restore function, which restores the user process's state and returns control back to the user program. There! Your read() call is done! If you're lucky, you even got your data back.
    You can explore syscalls further by putting printks at strategic places. Be sure to limit the amount of output from these printks. For example, if you add a printk to the sys_read() syscall, you should do something like this:
    static int mycount = 0;

    if (mycount < 10) {
            printk("sys_read called\n");
            mycount++;
    }
    Have fun!