Tagged: Informative

  • Kartik 8:06 PM on February 15, 2013 Permalink
    Tags: Informative

    LaTeX Project Report Template Updated 

    The LaTeX logo (Photo credit: Wikipedia)

    This will be quite short.

    Many have told me they found the LaTeX Project Report Template I published about two years back pretty useful. Today I am releasing an update for it, which includes a fix for a nasty bug in the original template apart from some layout changes.

    Fork it on GitHub at https://github.com/k4rtik/latex-project-report-template to start creating your project reports, and visit http://k4rtik.github.io/latex-project-report-template/ to get started. (I just discovered how cool GitHub Pages are!)

    Change Summary

    1. Changed base template to a technical project report.
    2. Added example for inserting an image.
    3. Fixed numbering bug with references and acknowledgement pages.

    Please shout out in the comments section if you found this useful. 🙂

     
    • sudheendra chari 10:49 PM on February 15, 2013 Permalink | Reply

      thanks man, gonna use this for my major project report 🙂

  • Kartik 12:00 AM on July 14, 2012 Permalink
    Tags: Informative, tools, YouTube

    Best Tool for Downloading YouTube Videos and Playlists 

    How many times has it happened that you stumble upon an awesome YouTube video, make up your mind to download it, but fall short of finding a good tool that actually works?

    In this post I want to introduce one of the best tools for downloading videos from YouTube and other video platforms – youtube-dl. This command-line tool is basically just a Python script released into the public domain by its developers, and it works on any platform that supports Python 2.x, including Linux, Windows and Mac OS X.

    If you are on Ubuntu, you can directly install it by using:

    k4rtik: $ sudo apt-get install youtube-dl

    If not, just go to the youtube-dl download page and save the script in your home directory. Then make the script executable with the following command:

    k4rtik: $ chmod +x youtube-dl

    You can also consider moving the script to your /usr/local/bin directory to invoke it directly without specifying the path:

    k4rtik: $ sudo mv youtube-dl /usr/local/bin

    In its most basic form, you can use this tool to download the highest quality of a video by supplying just the URL of the video as its argument, e.g.:


    k4rtik: Videos $ youtube-dl http://www.youtube.com/watch?v=Rk62hRBDLGc
    Setting language
    Rk62hRBDLGc: Downloading video webpage
    Rk62hRBDLGc: Downloading video info webpage
    Rk62hRBDLGc: Extracting video information
    [download] Destination: Rk62hRBDLGc.flv
    [download] 100.0% of 5.38M at 88.81k/s ETA 00:00
    k4rtik: Videos $

    My preferred way of downloading videos is by using the -t flag, which saves the video with the title in the file name, e.g.:


    k4rtik: Videos $ youtube-dl -t http://www.youtube.com/watch?v=Rk62hRBDLGc
    Setting language
    Rk62hRBDLGc: Downloading video webpage
    Rk62hRBDLGc: Downloading video info webpage
    Rk62hRBDLGc: Extracting video information
    [download] Destination: Reasons_to_love_Ubuntu_12_04_LTS-Rk62hRBDLGc.flv
    [download] 100.0% of 5.38M at 204.52k/s ETA 00:00
    k4rtik: Videos $

    In case you don’t want to download the highest quality, to save some bandwidth and time, youtube-dl offers a choice of multiple formats. See its man page to find out more. I prefer H.264 video at 480p for YouTube, which I specify like this:


    k4rtik: Videos $ youtube-dl -t -f 18 www.youtube.com/watch?v=02nBaaIoFWU
    Setting language
    02nBaaIoFWU: Downloading video webpage
    02nBaaIoFWU: Downloading video info webpage
    02nBaaIoFWU: Extracting video information
    [download] Multipath_TCP-02nBaaIoFWU.mp4 has already been downloaded
    k4rtik: Videos $

    You can also use it for downloading complete playlists off YouTube (quote the URL, otherwise the shell treats the & as a control operator):

    k4rtik: Videos $ youtube-dl -f 18 -t "http://www.youtube.com/watch?v=128ll4yXUfY&list=PL2E1848DB88958935"

    Oh, and did I mention… it continues the download if interrupted by network trouble or some other reason – just run the same command again.

    According to its man page, it supports video downloads from other popular video hosting sites like Facebook, Metacafe, Vimeo, Yahoo!, blip.tv, video.google.com and many more. Hope you enjoy using it for all your video downloading needs.

    Ciao

    K4rtik

    This article first appeared at http://www.digimantra.com/tutorials/best-tool-for-downloading-youtube-videos-and-playlists/

     
    • Jishnu 4:44 PM on July 17, 2012 Permalink | Reply

      Here is an alternate browser based solution which I use. http://userscripts.org/scripts/show/25105.

      • Kartik 6:49 PM on July 17, 2012 Permalink | Reply

        Yeah, I had installed that script as well. But sometimes I just prefer putting the terminal to work, and I consider it more reliable in case of failed downloads. 🙂

    • Vijeenrosh P.E 10:30 PM on September 22, 2012 Permalink | Reply

      I was beginning to wonder whether a play with wget, curl and some shell scripting could make a script like that 🙂

    • Deepak Malani 2:40 PM on November 20, 2012 Permalink | Reply

      Used this. Good utility. Wondering if this will need regular updates, as was the case with some earlier YouTube downloaders (that I used on Windows). BTW, is YouTube okay with people downloading its videos (in its terms & conditions)?

      • Kartik 3:19 PM on November 20, 2012 Permalink | Reply

        This actually gets updated pretty often. The download page currently lists the latest version from October 9 – http://rg3.github.com/youtube-dl/download.html

        My opinion is those are not really YouTube’s videos, they are people’s videos. As long as a user doesn’t mind sharing the link of their uploaded video publicly, YouTube shouldn’t have a problem with downloads. It’s another matter that they want visitors to view videos on their site, so they choose not to provide any explicit download link. Anyhow, this will slowly change with the HTML5 video tag, which comes with a download option by default. 🙂

    • Personal Site 7:33 AM on April 25, 2013 Permalink | Reply

      Hello! I know this is somewhat off topic but I was wondering which blog platform
      are you using for this site? I’m getting fed up of WordPress because I’ve had problems with hackers and I’m looking at alternatives for another platform. I would be fantastic if you could point me in the direction of a good platform.

      • Kartik 9:13 AM on April 25, 2013 Permalink | Reply

        As you can gather from the URL of this site, I am using WordPress itself. Haven’t had any problems with attacks, at least not on wordpress.com hosting. YMMV with self-hosted WordPress.

  • Kartik 1:09 AM on June 23, 2012 Permalink
    Tags: Capture the flag, ethical hacking, Hacking, Informative, KaNiJe, Meetup, Security

    A Fun Security Weekend with null and sCTF 

    I know it’s quite late to post about the last weekend when another weekend is around the corner, but I couldn’t help myself as the last one was so eventful. 🙂

    It was almost 2 weeks back that I got to know about sCTF 2012. I have always wanted to learn about computer security (and the darker side of hacking), but hadn’t been able to give time to it. What followed was a quick search for team members from among my batch via FB – Nithin immediately showed interest, and we pulled Jerrin in too. We quickly registered ourselves as Team KaNiJe (sounds Manga-ish, right?) after calling Vinod Pathari Sir and convincing him to become our mentor.

    We were handed the first set of questions for Round 1 via email. What I liked most about sCTF and its organizers was that they focused on being newbie-friendly while maintaining a decent level of quality in the contest. This was demonstrated in the Round 1 (christened the Learning Round) questions, which ranged from basics like installing VirtualBox and learning PHP and SQL, up to buffer overflow exploits and reverse engineering. There were many tasks and very few days with us – June 16 was the deadline. During this period I enjoyed hacking the basic missions at hackthissite.org and learnt a lot about iptables (the default Linux kernel firewall), buffer overflows, etc. I also went through the study report I had prepared for my Networks course assignment on common networking tools like ping, ssh, traceroute, ifconfig, netstat, wireshark, etc. to recall useful stuff, and then tried to familiarize myself with ethical hacking parlance using the suggested flashcards.

    I also happened to attend null Bangalore’s monthly meetup on Saturday (16th) and, need I mention, this was THE best community meetup I have ever attended! I got to learn basic SQL injection, some JavaScript obfuscation techniques and some memory forensics basics – the last was arguably the best session of the meetup. Through the meetup I got in touch with an MCA alumnus from my college – Shruti (who apparently knew me by name already) – and then enjoyed a buffet at a nearby restaurant with her friends (a gang of 6 white-hat hackers!). I was astonished to discover a whole new (for me) world of security professionals in India and how deeply they enjoy their work. This will definitely keep me interested in the security area for a while, more so because I will be taking Vinod Pathari Sir’s elective on Computer Security in the coming sem. Sadly, I was unable to attend the BangPypers June Meetup due to the approaching deadline of sCTF’s first round.

    Earlier, on the night of the 15th, we had divided the tasks among ourselves, two sections each. On the 16th afternoon, Jerrin and I met at CIS to finish up our submission for the first round; Nithin was collaborating from his home in Trivandrum. We had about 3 hours remaining until the deadline and I was yet to start on my sections (the lazy procrastinator that I am; a confusion about an extension of the deadline had added to my procrastination). My sections were Part 2 (MySQL, Apache, hardening, log files, PHP logging, etc.) and Part 4 (secure coding, attacks). Given my experience sysadmining for about the past three years, it didn’t take me more than an hour to finish the first section (of course, there were new things to learn as well). The other section was more of a problem with the time constraint, but I managed to do most of it. Just before the 7 pm deadline we submitted our partial solutions (the poor reverse engineering section was left completely blank!) and parted for the day.

    The next day was the second round (also online), scheduled from 10 am to 4 pm, which along with Round 1 would decide our qualification for the finals. I reached CIS at 10 and logged in to the contest portal; Jerrin joined soon, and Nithin too, remotely. The questions were divided into multiple sections – Crypto, Web, Binary, and Trivia. We got a good lead in the beginning when Nithin solved the first two in the Binary section. I started with Trivia and found it fairly easy in the beginning (Google was our assistant for that section 😉 ), but really got stuck at two questions in that section. Jerrin was solving the Crypto questions one by one. The fun part was that all the teams and organizers were connected over IRC. We could ask them doubts, and they kept us entertained with their IRC bot, live scoreboard announcements, and poking fun at each other and us. After a while I discovered that the organizers had made a minor mistake which worked to our advantage (I managed to finish those two nasty remaining Trivia questions) and put us on top of the rankings for a while. The next 2-3 hours were spent struggling with the remaining questions with little progress, and we ended up at rank 5 among the 18 teams present.

    Two days later, we were informed via mail that we had qualified for the finals! And that we were fully sponsored to attend the first International Conference on Security of Internet of Things, to be held from August 16 to 19 at the Amritapuri campus. The final round of the contest will be held on August 20, after the conference. I was overly excited because I had not been aware that we were eligible to attend the conference just by qualifying for the finals. According to Vinod Sir, it will be great to listen to Ross Anderson, who is speaking at the conference. Looking forward to a great experience at our first academic conference (and lots of learning in the field of security to prepare for the finals). 🙂

     
    • bithin 10:04 PM on June 23, 2012 Permalink | Reply

      Nice to hear that you enjoyed the contest 🙂 Thanks a lot 🙂

    • pprahul 11:56 PM on June 23, 2012 Permalink | Reply

      Great experience ah man!! Awesome!!

  • Kartik 10:12 PM on June 13, 2012 Permalink
    Tags: hardware hacker, Informative, Open Hardware

    (Bangalore) Summer of ’12… with BeagleBone 

    BeagleBone

    This post will be slightly long. Lots of exciting things happening over a Bangalore summer this year for me. 😀

    Somehow I always wanted to learn more about hardware and with a mentor like Khasim the road seems a lot more exciting. I first met him when he came to conduct a workshop on BeagleBoard during Tathva 2009 at my college – NIT Calicut. I was just a fresher then and have since regretted that I could not attend that workshop completely (due to my participation in various CS related competitions).

    Well, life took strange turns, and I, along with my friend Jerrin, landed up in Bangalore and got to hack on a BeagleBone (a low-cost, high-expansion, hardware-hacker-focused BeagleBoard). We initially learnt the very basics of working with a board like this using the serial output on the UART console (and discovered that we couldn’t proceed further until R219 was plucked out – thanks to another mentor, Mr. Satish Patel from Khasim’s team; fiola on the #beagle channel on freenode was a great help in troubleshooting as well). Then there was StarterWare, which enabled us to experiment with blinking LEDs and other small programs for the Bone.

    I then learnt how to read a schematic using the great book by Barr & Massa that Amarjit Singh suggested (I now recommend this book, Programming Embedded Systems, as a TO-READ if you want to learn the basics of embedded systems programming) and tried to understand the schematics of the BeagleBone (rev. A4). I was able to identify how various components on the board connect to the processor and the direction of data flow among them, and to understand how simple things like power reset, user LEDs, SDRAM, the USB host & connector, microSD and the expansion slots interact with the CPU.

    Next came an exploration of the design specifications of the board, with details about each external peripheral, from the BeagleBone System Reference Manual. I even tried to read the AM335x datasheet and Technical Reference Manual to extract useful information (like memory locations of on-chip peripherals, handling of interrupts at the CPU level, etc.) – datasheets are HUGE documents! Using this data, referring to the book by Barr & Massa and taking help from the StarterWare example programs, I was able to write my own code from scratch for blinking an LED on the BeagleBone as a pure learning exercise – believe me, it was total fun (however it may sound in this post)!

    Just today, I got my hands on Microchip’s Accessory Development Starter Kit for Android (pictured below). I will be using it to understand the ins and outs of Android’s Open Accessory Protocol and will try to port the firmware to the BeagleBone so that it can be used as an ADK platform as well. Lots of learning, fiddling with USB APIs, Android hacking, and of course embedded C programming to follow next (and I am up for the game!).

    Here are some pics of the awesome things I am playing with these days:

    I will try to regularly post about my progress here and yes, there is a lot more I have to say about this Bangalore Summer, but some other post, some other time. 🙂

    Ciao

    k4rtik

     
    • appu sajeev 10:30 PM on June 13, 2012 Permalink | Reply

      from where did u buy the beaglebone?

      • Kartik 12:43 AM on June 14, 2012 Permalink | Reply

        Did not buy, procured! From my mentor. 🙂

    • Sajjad Anwar (@geohacker) 12:16 AM on June 14, 2012 Permalink | Reply

      Yay! Super excited to know that you are enjoying your time in Bangalore! Good luck 🙂

      • Kartik 12:45 AM on June 14, 2012 Permalink | Reply

        Thanks. And it’s because of you and so many other people I am meeting here in Bangalore. 🙂

    • Pranav 9:50 AM on June 14, 2012 Permalink | Reply

      Awesomeness 😀

    • Pramode 10:02 PM on June 15, 2012 Permalink | Reply

      Have fun hacking the BeagleBone (and other stuff)!!

      • Kartik 10:06 PM on June 15, 2012 Permalink | Reply

        Yes, loving it.
        And this time I would really love if you could visit our campus for a workshop on hardware hacking. We two would be able to assist too. 🙂

  • Kartik 1:29 AM on March 27, 2012 Permalink
    Tags: Computer engineering, Computer science, Informative

    Ponderings of the Year Past: Part 1 

    It’s been long since I wrote something. To be frank, last July was the last time I actually spoke my mind here. Of course, I wished to write a lot more (who doesn’t?). This one is going to be long, so I decided to partition it.

    The following are a few important lessons I learnt over the year gone by – 2011:

    Computer Science != Computer Engineering

    Vineeth Sir always calls it Computing Science; we even had a discussion over it, perhaps over a year back (and once again recently). The doubt got cleared up only after I looked up ACM’s Computing Curricula Recommendations, which define five sub-disciplines of computing: Computer Science, Computer Engineering, Information Systems, Information Technology, and Software Engineering (they have pretty decent comparisons among these disciplines using pictures and charts). The story didn’t end there, though; I was yet to realize what repercussions this seemingly subtle difference had in store for me.

    Some people like Nandu (I had an hour-long discussion with him months back) would call it the Department’s shortcoming that it is too focused on the Science part of the field, which leads to less interest in technology among students and hence a lack of quality in academic (or otherwise) projects. Anyhow, for me this was the year of actually realizing this difference – with low scores and waning interest in some of the core theory subjects, including Data Structures and Algorithms (not all, though – I loved Discrete Computational Structures and managed to secure an ‘S’).

    Perhaps I always wanted to be a Computer Engineer! I was always the tech-geek with basic know-how of (almost) everything, mostly counting on my experience, trying to help others and having a fondness for cryptic syntax; but over time it dawned on me that that’s not all – theory subjects tend to be more intellectually inclined (or research-focused, if you may), and as Murali Sir frequently points out, preferring systems over theory in CS should be a matter of choice, not inability. Perhaps the most important thing I failed to do was read, owing to over-confidence and a fuzzy image of self.

     

    Lot more to say. Part 2 coming out soon.

     
  • Kartik 8:40 AM on February 12, 2012 Permalink
    Tags: Informative

    Control All Computers in a Lab from a Single System 

    Quoting Dhandeep, our super-cool lab-admin:

      now, all 70 systems in the lab can be switched on and switched off by single commands from the hostel…

    Yes, that and a lot more is possible in our Software Systems Lab now. How? Read on…

    The Setup

    We have over 70 systems with Ubuntu 10.04 LTS installed on them. There is an administrative account (let’s call it admin for this post) and a guest (limited-privilege) account on each. Needless to say, the admin password is known only to admins, and the guest password is known to all who use the lab. All these systems are configured to be controllable remotely (read: an OpenSSH server is installed on each).

    Basic Idea

    1. Log in via SSH without a password
    2. Write your desired command and run it in background
    3. Run the above in a loop for the lab’s subnet.

    Detailed Steps

    See Tips for Remote Unix Work (SSH, screen, and VNC) for the first step (and for more immensely useful tips on remote usage of *NIX systems).
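    The linked article covers step 1 in detail; as a rough sketch (my commands, not the article’s – the key path and host below are placeholders), it boils down to generating a key pair and pushing the public key to each machine:

```shell
#!/usr/bin/env bash
# Password-less SSH in two steps (sketch; key path and host are placeholders).
set -e
keydir=$(mktemp -d)

# 1. Generate an RSA key pair with an empty passphrase, so scripts can use it.
ssh-keygen -t rsa -N "" -f "$keydir/id_rsa" -q

# 2. Push the public key to each lab machine (this asks for the admin
#    password once per machine; not run here):
#      ssh-copy-id -i "$keydir/id_rsa.pub" admin@192.168.xxx.101

ls "$keydir"
```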

    For Step 2, here is one example command:

    ssh -t admin@labsystem "echo <pass> | sudo -S shutdown -h now" &

    In the above command, labsystem is usually replaced with an IP address like 192.168.xxx.xxx, and <pass> with the password of the admin account.

    WARNING: don’t use the above command out in the open, to keep the password from prying eyes; also note that for additional security you need to make sure it is not saved in your bash history, and, if the command is in a script, that the script is not readable by others.

    Whether the ampersand at the end is required depends on the particular usage (if you want to run, let’s say, the uptime command over ssh, you would not want the output to go to the background, or you could redirect the output to some file). Putting the process in the background, in this case, will help in the next step.

    The -S switch makes sudo read the password from stdin (we had discovered this switch in sudo’s man page, but didn’t figure out that “echo pass |” would do the trick until we found it on Stack Overflow).

    Step 3: use your favorite scripting language (bash, python, etc.) and run the above command for all the systems of your lab subnet. An example in bash:

    for ip in {101..180}
    do
    	ssh -t admin@192.168.xxx.$ip "echo <pass> | sudo -S shutdown -h now" &
    done
    

    The above code snippet runs the desired command on all systems in the subnet within the IP range 192.168.xxx.101 to 192.168.xxx.180. Now you can clearly see how putting the process in the background helps – the next iteration of the loop need not wait for the command in the previous iteration to finish!
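    A variation I would suggest (my sketch, not part of the original post): read the password at runtime with read -s so it never lands in your shell history, and add a dry-run switch to preview the commands first:

```shell
#!/usr/bin/env bash
# Fan a shutdown command out to the whole lab (sketch; the IPs are the
# same placeholder range as above). The password is prompted for at
# runtime instead of being hard-coded into the script.
# With DRY_RUN=1 the function only prints what it would execute.
shutdown_lab() {
  local pass ip
  read -r -s -p "admin password: " pass
  for ip in {101..180}; do
    if [ "${DRY_RUN:-0}" = 1 ]; then
      echo "ssh -t admin@192.168.xxx.$ip \"echo <pass> | sudo -S shutdown -h now\" &"
    else
      ssh -t "admin@192.168.xxx.$ip" "echo $pass | sudo -S shutdown -h now" &
    fi
  done
  wait   # let all background ssh sessions finish
}

# Preview the 80 commands without touching any machine:
#   DRY_RUN=1 shutdown_lab <<< "yourpassword"
```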

    In passing, here’s a small video I shot featuring Dhandeep when he got all excited to see this working:

    That’s it. Try this out, share your tricks and have some *NIX fun in your lab. 🙂

    PS: I have not covered how the systems can be switched on with this setup. It basically involves broadcasting a magic packet to the subnet. Hope Dhandeep comes up with a blog post on that soon. 😉 Update: here it is – On the push of a button..
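    For the curious, the magic packet itself is simple: 6 bytes of 0xFF followed by the target machine’s MAC address repeated 16 times, sent as a UDP broadcast. A sketch (my construction, not from Dhandeep’s post; the MAC is a placeholder):

```shell
#!/usr/bin/env bash
# Build a Wake-on-LAN magic packet: 6 x 0xFF + the MAC repeated 16 times.
mac="aa:bb:cc:dd:ee:ff"            # placeholder -- use a real lab MAC

# Turn aa:bb:... into a printf-interpretable \xaa\xbb\... string.
hex="\\x${mac//:/\\x}"

build_packet() {
  printf '\xff\xff\xff\xff\xff\xff'
  for _ in {1..16}; do printf '%b' "$hex"; done
}

# Sanity check: a magic packet is always 102 bytes.
build_packet | wc -c

# One way to broadcast it on UDP port 9 is bash's /dev/udp pseudo-device
# (not run here; needs your subnet's broadcast address):
#   build_packet > /dev/udp/192.168.xxx.255/9
```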

    Ciao

    Kartik

     
    • firesofmay 8:53 AM on February 12, 2012 Permalink | Reply

      Sweet! I love it! 😉

    • Amarnath 8:54 AM on February 12, 2012 Permalink | Reply

      Interesting. But, I think you forgot to mention the important prerequisite for doing this task. Don’t you need to generate public keys for all machines to be controlled and pass it to the central control node? I believe only this would help in password-less remote login via SSH.

      Indeed Dhandeep seems to be pretty excited about it. 🙂

      Cheers

      Amarnath

      • Kartik 10:03 AM on February 12, 2012 Permalink | Reply

        Thanks for your comment Amarnath.

        Indeed, that is necessary, and it is mentioned as the first step. But instead of describing the whole process myself, I chose to point to a good resource (Tips for Remote UNIX Work…) for that kind of setup. Perhaps you missed it. 😉

    • Lokesh Walase 5:41 PM on February 12, 2012 Permalink | Reply

      Awesome !! 🙂

    • Imran 11:04 PM on February 13, 2012 Permalink | Reply

      You can use Puppet to design a more efficient system which gives you more flexibility in automation.

      • Kartik 12:56 AM on February 14, 2012 Permalink | Reply

        Yeah, that’s right. It’s on my to-do list to learn soon. 🙂 Though I am not aware whether it works for normal desktop systems too.

  • Kartik 12:00 AM on August 19, 2011 Permalink
    Tags: Apache, Hosting, Informative

    Configuring Apache for Developing Multiple Websites under Ubuntu Linux 

    Are you a beginner in web development with the LAMP environment, or a CS/IT student overwhelmed by setting up LAMP and configuring it for your web-related academic project? If yes, then you are at the right place.

    We have discussed earlier how easy it is to set up a web development environment on your Ubuntu desktop. In this article we will create a more customized setup of this environment for easily working with multiple websites. You will also find below an introduction to some useful Apache utilities and configuration files on Ubuntu/Debian systems.

    One way to work on multiple sites is to put different directories under /var/www and access them as http://localhost/site1, http://localhost/site2, etc., but there is another, more elegant way: we can access our sites simply by pointing our browsers to http://site1/, http://site2/, etc. To find out how, follow the tutorial.

    Let’s keep all our development websites in a public_html folder under the home directory. Apart from making your websites easy to manage from inside your home folder, this has the benefit that you no longer need sudo to put files in your document root (/var/www earlier).

    Open the Terminal and follow the steps for a test setup; you can customize it to your preferences later:

    kartik@PlatiniumLight ~ $ mkdir public_html
    kartik@PlatiniumLight ~ $ cd public_html/
    kartik@PlatiniumLight ~/public_html $ mkdir site1
    kartik@PlatiniumLight ~/public_html $ mkdir site2
    kartik@PlatiniumLight ~/public_html $ 

    Create files called index.html with some content inside each of these directories to help you identify them when you walk through this tutorial:

    kartik@PlatiniumLight ~/public_html $ cd site1
    kartik@PlatiniumLight ~/public_html/site1 $ gedit index.html
    Screenshot - index.html (~/public_html/site1) - gedit

    Enter some text such as “This is site 1” and save the file. Similarly put “This is site 2” in the index.html of site2 directory.

    kartik@PlatiniumLight ~/public_html/site1 $ cd ../site2
    kartik@PlatiniumLight ~/public_html/site2 $ gedit index.html
    Screenshot - index.html (~/public_html/site2) - gedit

    Now begins the main step of the process: we have to create a separate Apache configuration file for each website. This is fairly easy to do – we copy the default config file and modify it to our needs:

    kartik@PlatiniumLight ~/public_html/site2 $ cd /etc/apache2/sites-available/
    kartik@PlatiniumLight /etc/apache2/sites-available $ sudo cp default site1
    kartik@PlatiniumLight /etc/apache2/sites-available $ sudo cp default site2
    kartik@PlatiniumLight /etc/apache2/sites-available $ sudo vim site1

    Note that I am using the vim editor for making the changes, as I prefer it for its amazing syntax highlighting support even for various Linux configuration files; you may use gedit or any other editor instead.

    We need to change the paths in the DocumentRoot and Directory directives to the particular website’s directory, and add a ServerName directive:

    ServerName site1
    DocumentRoot /home/kartik/public_html/site1
    <Directory /home/kartik/public_html/site1/>

    Take note of the ‘/’ near the end of the third line – it is important. Also don’t forget to add the line with the ServerName directive just before the DocumentRoot directive. The lines which need to be edited are highlighted with red arrows in the following screenshot:

    Screenshot - Terminal - Editing site1 config file

    Save the file. Similar changes need to be made for the other sites as well – site2 in this example.
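    For reference, after the edits a site1 config might look roughly like this (a sketch modeled on the default file shipped with Apache 2.2 on Ubuntu; your copied default will carry extra directives such as logging options):

```apache
<VirtualHost *:80>
    ServerName site1
    DocumentRoot /home/kartik/public_html/site1
    <Directory /home/kartik/public_html/site1/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride None
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>
```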

    Now we need to make entries in the hosts file so that the new site names resolve to our own machine (Apache then serves the right site by matching its ServerName).

     kartik@PlatiniumLight ~ $ sudo gedit /etc/hosts

    Add the names of the new sites after localhost as shown:

    Screenshot - hosts (/etc) - gedit
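    Concretely, the change amounts to appending the new site names to the loopback entry (shown here for a stock Ubuntu hosts file):

```
127.0.0.1	localhost site1 site2
```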

    Now enable the new websites by using a2ensite command:

    kartik@PlatiniumLight ~ $ sudo a2ensite site1
    Enabling site site1.
    Run '/etc/init.d/apache2 reload' to activate new configuration!
    kartik@PlatiniumLight ~ $ 

    Do similarly for site2, and then run the following command to activate the new configuration:

    kartik@PlatiniumLight ~ $ sudo service apache2 reload
    * Reloading web server config apache2
    apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
    [ OK ]
    kartik@PlatiniumLight ~ $ 

    You can ignore the warning shown in the above output. Now open http://site1 and http://site2 in different tabs of your favorite browser; you will find the contents of their respective index.html files.

    Screenshot - Chromium showing both the sites

    And now you are done setting up multiple sites in Apache. You can create as many sites as desired by following the same procedure, creating a config file for each website.

    Here follows an introduction to some useful Apache utilities and configuration files which aid you in this kind of a setup:

    a2ensite – as we used this above, you already know what it does: enable a website from those available under the /etc/apache2/sites-available directory.

    a2dissite – does exactly the opposite: disables a website so that it is no longer served by your web server. E.g.:

    kartik@PlatiniumLight ~ $ sudo a2dissite site2
    [sudo] password for kartik:
    Site site2 disabled.
    Run '/etc/init.d/apache2 reload' to activate new configuration!
    kartik@PlatiniumLight ~ $ sudo service apache2 reload
    * Reloading web server config apache2
    apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
    [ OK ]
    kartik@PlatiniumLight ~ $ 

    This will stop site2 from loading any more. You can use a2ensite to enable it again.

    Similar to sites, there are corresponding directories and commands for controlling Apache modules as well. Available modules are kept under the /etc/apache2/mods-available directory, and enabled ones are symlinked from the mods-enabled directory. The corresponding commands are a2enmod and a2dismod. E.g.:

    kartik@PlatiniumLight /etc/apache2/conf.d $ sudo a2enmod rewrite
    Enabling module rewrite.
    Run '/etc/init.d/apache2 restart' to activate new configuration!
    kartik@PlatiniumLight /etc/apache2/conf.d $ sudo service apache2 reload
    * Reloading web server config apache2
    apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1 for ServerName
    [ OK ]
    kartik@PlatiniumLight /etc/apache2/conf.d $ 

    This will enable Apache’s rewrite module, which is required by some CMS solutions like Drupal to enable pretty URLs.

    Hope this article was useful to you. Feel free to ask your doubts, share problems you face, or post any similar tips you might have in the comments.

    An edited version of this article first appeared at http://www.muktware.com/articles/1876

     
    • Ershad K 10:30 PM on August 19, 2011 Permalink | Reply

      Informative post, nicely written 🙂

    • Abhimanyu M A 11:34 AM on August 22, 2011 Permalink | Reply

      Hey, I think last time I tried it on the hostel LAN I was able to perform a MiM attack: people trying http://site1/ got to my site when no actual site was there, and [not sure about this] even when the other site took a bit long to respond.

      • dhandeep 10:07 PM on August 22, 2011 Permalink | Reply

        exactly.. u need to work a lot for setting up the backend to fake sites and get the whole control of the lan data… 🙂 😉 .. well u can even setup a plugin that rediercts to actual site if that site is not on ur comp.. (this ll eliminate the chance of ppl suspecting u.)

      • Kartik 10:28 PM on August 22, 2011 Permalink | Reply

        It might have been a problem of incorrectly configured DNS entries in the hosts file. Please elaborate your scenario more, I am unable to understand the situation clearly.

    • sorbh_teke 5:28 PM on September 12, 2012 Permalink | Reply

      What configuration need to do in order to make all these sites available through internet ??

      • Kartik 8:07 PM on September 12, 2012 Permalink | Reply

        From Apache’s side everything is ready. Additionally, you need to configure your DNS servers to point these new ‘names’ to your server.

    • Miquel 12:52 AM on September 28, 2012 Permalink | Reply

      Hi Kartik. I know it’s an old post, but I’ve configured my sites the same way as you.
      But now I have a problem with mod_rewrite. One of my sites is a Joomla 2.5 site with SEF and URL rewriting enabled (site1).
      When I pint to http://site1 I get redirected to http://site1/en (its a multilingual site, it’s ok).
      But when I access any other site, for instance http://site2, I get redirected to http://site2/es and the contents of site1 are being shown.
      Whats happening? Can coexist more than one site, one with URL rewriting and another without it?
      Kind regards

      • Kartik 9:45 AM on September 28, 2012 Permalink | Reply

        Interesting.

        Looks like your rewrite rules are in the wrong place. If you put them in .htaccess of site1 (instead of putting in apache’s config file) and access site2, I doubt it will ever redirect again to site1’s contents.

        Let me know, if this solved your problem.

    • miquel 11:38 PM on October 3, 2012 Permalink | Reply

      Thanks Kartik. I reviewed all the process and I have some sites with ServerName and some others without it, and maybe this was the error. Now it works fine, great!

    • edo 8:16 AM on February 8, 2013 Permalink | Reply

      Nice post guy. It’s realy help me. Thanks

    • Bharat 11:32 AM on October 3, 2013 Permalink | Reply

      Hey there Kartik Buddy,

      This requires and addition of a tag

      “ServerName site1”

      in sites-enabled/site1

      To make it perfectly work !

      Rest is cool 😉

      • Kartik 12:35 PM on October 3, 2013 Permalink | Reply

        You are right. But is it absolutely necessary? I have tried this setup without ServerName directive and it works completely fine.

        • Bharat 6:23 PM on October 3, 2013 Permalink

          yeah but in my case (ubuntu 12.04) it keeps on redirecting me to index.html of /var/www withdout this !

        • Kartik 7:45 PM on October 3, 2013 Permalink

          Bharat, I just checked my post again, ServerName directive was always there (with extra warning about including it, see just above the screenshot), just that the screenshot doesn’t contain it. 🙂

  • Kartik 10:11 PM on July 14, 2011 Permalink
    Tags: , , Informative,   

    Remove Orphaned Actions in Drupal 7 

    I recently discovered this weird behaviour in drush after I disabled the Comment module in Drupal 7. Drush gave the following warning whenever I tried to enable or disable any module:

    WD actions: 3 orphaned actions (comment_publish_action, comment_save_action, comment_unpublish_action) exist in the actions table. Remove orphaned actions

    After some scrounging on drupal.org (where most solutions were meant for Drupal 6), I found the solution at http://blog.devkinetic.com/node/9. Just execute the following once:

    drush php-eval "actions_synchronize(actions_list(), TRUE);"
     
    • Max Nylin 11:53 AM on August 17, 2011 Permalink | Reply

      Thank you for spreading this, just what I was looking for.

      / Max

      • Kartik 2:43 PM on August 17, 2011 Permalink | Reply

        Nice to know you found this useful. 🙂

    • Kyle 8:15 PM on August 19, 2011 Permalink | Reply

      Yea, this bites me every once in a while, and going to the “orphans” URL doesn’t always work. This works great! Thanks.

      • Kartik 8:48 PM on August 19, 2011 Permalink | Reply

        That ‘orphans’ url method is meant for Drupal 6 only.

        Thanks for your comment. 🙂

  • Kartik 12:00 AM on July 9, 2011 Permalink
    Tags: , , , , Informative, Joomla, , , MySQL, , ,   

    Easiest Way to Setup a Web Development Environment on Ubuntu-based Distros 

    Did you know how easy it is to get a basic web development environment on your Ubuntu-based Linux distribution? It takes just two commands in the Terminal:

    sudo apt-get install tasksel

    This will install a small utility which lets you install a lot of packages grouped together as software collections.

    sudo tasksel

    Launch tasksel and select ‘LAMP server’ by pressing the SPACE key, then press ENTER when you are done (see the attached screenshot). It will take some time for the required packages to download and install. Near the end of the setup, the installer will ask you to create a password for MySQL’s root user.

    Select LAMP Server among the choices in Tasksel

    After the installer finishes, the environment is ready. Head over to your favorite browser and open http://localhost. If everything went fine, the page will say “It works!”

    It works!

    Now you can start creating websites by putting your HTML, PHP, and other files under the /var/www directory, or go with CMS solutions like Drupal, WordPress, or Joomla.

    The author also recommends installing the phpmyadmin package if you happen to work with MySQL databases.
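To confirm PHP itself is wired into Apache, the classic trick is a phpinfo() page in the document root. A minimal sketch, written against a temporary directory here so it can run unprivileged; on a real setup you would create /var/www/info.php (with sudo) instead:

```shell
# Create a one-line PHP test page. DOCROOT stands in for /var/www so
# this sketch runs without root; adjust the path for a real server.
DOCROOT=$(mktemp -d)
printf '<?php phpinfo(); ?>\n' > "$DOCROOT/info.php"
cat "$DOCROOT/info.php"
```

Browsing to http://localhost/info.php should then show the PHP configuration page. Delete the file when you are done, since it exposes server details.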

    An edited version of this article first appeared at http://www.muktware.com/articles/08/2011/1348

     
  • Kartik 12:00 AM on June 30, 2011 Permalink
    Tags: Bluetooth, , Data Communications, , , , Informative, , , Smartphone, , , Wi-Fi, Wireless, Wireless LAN,   

    Use rfkill to Enable/Disable Wireless on your Linux Laptop 

    This notebook computer is connected to a wireless access point using a PC card wireless card. (Photo credit: Wikipedia)

    Imagine a situation: you have to book an air or train ticket in a jiffy, or quickly check an important email, the only connectivity around is a Wi-Fi network from your smartphone or surroundings, and the machine at hand runs some Linux variant. Even after installing all the necessary drivers, you are unable to get Wi-Fi working on your laptop. Frustrating, right? If so, read on about a useful utility called rfkill, which you can keep handy for those weary situations.

    I own a Dell Studio XPS 1645 and have always found it cumbersome to get the wi-fi working on my system, mainly during those geek/hacker meetups, the only times I have to use wireless Internet. I remember randomly switching wireless on and off through the hardware switch and rebooting my system multiple times in order to get it working. Well, this was the situation until I discovered rfkill – a tool for enabling and disabling wireless devices including Wireless LAN, Bluetooth, etc. Here follows a tutorial on how to use it (fire up the Terminal before proceeding):

    rfkill’s list command lets you see all the available devices. If you don’t find one of your devices, make sure you have turned the hardware switch ON and installed the drivers for it. Here is what I get on my system after enabling the hardware switch:

    kartik@PlatiniumLight ~ $ rfkill list
    0: brcmwl-0: Wireless LAN
        Soft blocked: yes
        Hard blocked: no
    1: dell-wifi: Wireless LAN
        Soft blocked: yes
        Hard blocked: no
    2: dell-wwan: Wireless WAN
        Soft blocked: yes
        Hard blocked: yes
    3: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no
    kartik@PlatiniumLight ~ $

    Apart from Bluetooth, I usually find all the other devices in a random mix of yes and no. To enable them, issue the unblock command as shown:

    kartik@PlatiniumLight ~ $ rfkill unblock 0
    kartik@PlatiniumLight ~ $ rfkill unblock 1
    kartik@PlatiniumLight ~ $ rfkill unblock 2
    kartik@PlatiniumLight ~ $ rfkill list
    0: brcmwl-0: Wireless LAN
        Soft blocked: no
        Hard blocked: yes
    1: dell-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    2: dell-wwan: Wireless WAN
        Soft blocked: no
        Hard blocked: no
    3: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no
    kartik@PlatiniumLight ~ $

    You can also try using the unblock all command for enabling all the devices together:

    kartik@PlatiniumLight ~ $ rfkill unblock all

    Sometimes, even after unblocking once, some device may still show up as blocked (see the 0th device above, which shows Hard blocked as yes). To correct this, just issue the unblock command again for that particular device:

    kartik@PlatiniumLight ~ $ rfkill unblock 0
    kartik@PlatiniumLight ~ $ rfkill list
    0: brcmwl-0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    1: dell-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    2: dell-wwan: Wireless WAN
        Soft blocked: no
        Hard blocked: no
    3: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no
    kartik@PlatiniumLight ~ $

    When you get all the devices unblocked, you will not face any trouble connecting to wi-fi devices around. 🙂
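If you script this on a machine with several radios, you can pull the still-soft-blocked device indices out of rfkill list output. A small sketch, with sample output inlined (an assumed shape, matching the listings above) so it runs anywhere; on a real system, substitute the live rfkill list command:

```shell
# Sample `rfkill list` output, inlined for reproducibility.
rfkill_output='0: brcmwl-0: Wireless LAN
    Soft blocked: yes
    Hard blocked: no
1: hci0: Bluetooth
    Soft blocked: no
    Hard blocked: no'
# Remember each device index, and print it if that device is soft blocked.
blocked=$(printf '%s\n' "$rfkill_output" | awk -F': ' '
    /^[0-9]+:/            { id = $1 }
    /Soft blocked: yes/   { print id }')
echo "soft-blocked devices: $blocked"
```

Feeding each printed index back into rfkill unblock would then clear the soft blocks.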

    Bonus Tip: If you have a common hardware switch for all wireless radios, you can turn off individual devices like Bluetooth (or vice versa) to save some battery life using rfkill’s block command:

    kartik@PlatiniumLight ~ $ rfkill block 3
    kartik@PlatiniumLight ~ $ rfkill list
    0: dell-wifi: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    1: dell-wwan: Wireless WAN
        Soft blocked: no
        Hard blocked: no
    2: brcmwl-0: Wireless LAN
        Soft blocked: no
        Hard blocked: no
    3: hci0: Bluetooth
        Soft blocked: yes
        Hard blocked: no
    kartik@PlatiniumLight ~ $

    Stay Dignified!

    • Kartik

    Originally published at http://www.digimantra.com/linux/rfkill-enabledisable-wireless-linux-laptop/

     