Recommendations to TRAI on Transparent Broadband Services

My recommendations to TRAI on the Draft Direction on Delivering Broadband Services in a Transparent Manner:

1. In my opinion, a bare-minimum broadband speed of 512 kbps is by itself regressive, and the 64 kbps sought by Airtel would take our nation’s IT infrastructure standards back to the Stone Age. I request that a minimum mandated broadband speed of 2 Mbps be set; while this does not let us compete with the developed nations, it at least puts us on par with other developing nations.

2. Please add a clause (f) to the consultation paper mandating that Quality of Service be quantified. QoS is currently ignored by ISPs, which results in low throughput, high latency, jitter and dropped packets. As a consumer who pays for a service, I expect a standard in the service rendered, and today there is neither a standardized measuring mechanism nor a process for reparation when the set standard is not met by the Service Provider.

3. Service Providers should be transparent about the minimum committed speed on their 2G/3G/4G stacks. Note the misuse of the term “up to” by ISPs: no minimum QoS is ever mentioned by the Service Providers. To give an example, I have never seen a practical transfer speed of 500 kbps on EDGE, let alone the theoretical 1 Mbps, and it is the same with HSPA and HSPA+ – not even 50% of the theoretical transfer speeds of 14.4 and 16 Mbps respectively has been experienced.

4. Complaints by users are generally treated as isolated incidents, with little importance given to resolving the issue. Service Providers should publish a cumulative report of customer complaints month over month, along with their plan of action for mitigation and service improvement – after all, this is a bare-minimum expectation, as they are answerable to the consumers who pay for the service. Such reports would serve as region-wise Key Performance Indicators for ISPs, and consumers could take informed decisions based on them.

Facebook is executing a well-crafted DDoS on the State

Facebook is executing a well-crafted DDoS on the State in an attempt to kill Net Neutrality and promote its agenda of Free Basics.

It is to such a measure that Facebook has stooped to ensure its interest in Free Basics is protected from whatever decision the Telecom Regulatory Authority of India might arrive at.

For the second time, TRAI has come up with a Consultation Paper on Differential Pricing for Data Services and is seeking comments from the public to shape the future of Net Neutrality in India.

The first time, Facebook came up with online commercials aimed at the sentiment that everyone should get the internet.

The catch is, everyone gets the internet Facebook wants them to see and know, not the flat world in its truest existence.

This time around, Facebook resorted to full-page advertisements in print media and on billboards and, that aside, to misleading people into submitting a drafted opinion supporting Free Basics from its own platform.

The trick here is, people don’t even know that they are supporting such a closed system – merely scrolling the page made them support Facebook’s agenda.

The worst part is, people are reporting that their dead relatives have supported Facebook’s Free Basics – now beat that!

To top it, Facebook runs full-fledged advertising campaigns on its own platform to promote its Free Basics agenda, not just in India but outside India as well.

By this, Facebook is misleading people and turning them into a huge botnet against TRAI and the people of India.

Considering Facebook’s pseudo-monopoly status and the mass-psychology tactics it is deploying, the volume of ill-informed submissions – many made without the user’s consent – skews the data that TRAI is receiving. And this, in my opinion, is a well-crafted DDoS attack executed by Facebook on the State.

Now imagine: this skewed data is going to be taken into consideration during a decision-making process on India’s internet freedom.

Here is a representation of what a DDoS looks like – thanks to @r0h1n.

You can submit a drafted response, or edit the draft before sending it to TRAI, from Save The Internet – this is your bit in protecting internet freedom in India.

Let’s Encrypt Free SSL Certificate and Nginx on Ubuntu

Encryption For The Masses by Let’s Encrypt

Let’s Encrypt brings open, free SSL certificates to make encryption possible for the masses.

In case you missed the chatter, Let’s Encrypt is a new Certificate Authority providing free SSL certificates through an automated process and, best of all, it is open.

This means you don’t need to pay to get a certificate issued for your site – you can use the free SSL certificate provided by Let’s Encrypt.

If you can’t wait till December 3, 2015 – when they open their Public Beta – go ahead and sign up for their Limited Closed Beta.

I signed up two weeks ago, got my Closed Beta invite whitelisting a few domains two days ago, and decided to play with it today.

Their client makes it pretty easy to get the certificate, and in a few minutes you can have your server up and running with SSL – following these steps should make it even easier.

Installing Git

If you don’t have git installed already, you’d need it.

sudo apt-get update
sudo apt-get install git

That should get git installed on your server. Before you begin, stop nginx – the client would otherwise throw errors, as nginx would prevent it from binding to port 80.

Stopping Nginx

sudo service nginx stop

Downloading Let’s Encrypt Client

Now let’s move towards some real action by downloading the Let’s Encrypt client:

git clone https://github.com/letsencrypt/letsencrypt
cd letsencrypt

Generating Let’s Encrypt Free SSL Certificate

The nifty letsencrypt-auto tool is used to update and manage dependencies in a Python virtual environment. You can run the command as a normal user, and it will prompt you when it needs superuser permissions.

./letsencrypt-auto certonly -a standalone --server --agree-dev-preview

Running this will set up the environment and install dependencies, and you’ll be asked for your email address for notifications and key recovery.

It will next ask you to read the Terms of Service, and you’ll have to accept to proceed.

Next, you’ll be prompted to input the domains that were whitelisted and emailed to you. You can separate them using commas or spaces. Alternatively, you can specify them in the command itself using -d your_domain.tld – that would look like this:

./letsencrypt-auto certonly -a standalone -d your_domain.tld --server --agree-dev-preview

After a bit, it would give you the following output:

Updating letsencrypt and virtual environment dependencies...
Running with virtualenv: sudo /home/user/.local/share/letsencrypt/bin/letsencrypt certonly -a standalone -d your_domain.tld --server --agree-dev-preview

It’ll also tell you where the certificate and chain have been saved – by default, at /etc/letsencrypt/live/your_domain.tld/fullchain.pem

Using a Strong Diffie Hellman Group

You’ll have to generate a new Diffie-Hellman Group by using the following commands:

cd /etc/nginx
openssl dhparam -out dhparams.pem 2048

That should take about 2-3 minutes to generate a Strong DH Group and you’ll find dhparams.pem in the path /etc/nginx/
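If you’d like to sanity-check the result (the path below assumes the commands above), openssl can print the parameter size back to you:

```shell
# Inspect the generated parameters; if generation succeeded, the first
# line of output should read "DH Parameters: (2048 bit)".
openssl dhparam -in /etc/nginx/dhparams.pem -noout -text | head -n 1
```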

Using Mozilla’s SSL Configuration Generator for Nginx

You can use Mozilla’s SSL Configuration Generator to generate nginx Server Block configuration.

Here is the config for updating your nginx.conf or vhost configuration file for nginx:

server {
    listen *:80;
    listen *:443 ssl spdy;

    ssl_certificate /etc/letsencrypt/live/your_domain.tld/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/your_domain.tld/privkey.pem;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits – remember we generated this earlier
    ssl_dhparam /etc/nginx/dhparams.pem;

    # protocol configuration – tweak to your needs
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

    # HSTS (ngx_http_headers_module is required) (15768000 seconds = 6 months)
    add_header Strict-Transport-Security max-age=15768000;

    # OCSP Stapling – fetch OCSP records from the URL in ssl_certificate and cache them
    ssl_stapling on;
    ssl_stapling_verify on;

    # verify chain of trust of OCSP response using Root CA and Intermediate certs
    ssl_trusted_certificate /etc/letsencrypt/live/your_domain.tld/chain.pem;

    # DNS resolver configuration
    resolver valid=86400;
    resolver_timeout 10;

    # Rest of your server configuration…
}


Starting Nginx

Once your nginx or vhost configuration file has been updated, go ahead and start nginx by issuing the following command:

sudo service nginx start

Testing your site with the brand new Let’s Encrypt Free SSL Certificate

Now head to your browser and open https://your_domain.tld – your site should load as a secured site with a valid certificate.
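If you prefer the command line, a quick check is a HEAD request with curl (your_domain.tld is the placeholder used above); curl validates the certificate chain by default, so a clean response means the Let’s Encrypt chain is being served correctly:

```shell
# HEAD request over HTTPS; this fails with a certificate error
# if the chain served by nginx is wrong or incomplete.
curl -I https://your_domain.tld
```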

Using SSL Lab’s SSL Server Test

Finally, for the feel-good factor, head to SSL Labs’ SSL Server Test, enter your domain name, submit, and wait for it to give you an A+ rating.

Install Node.js 5.0 Stable on Ubuntu 14.04 LTS

Node.js 5.0 is a stable version released in October 2015, right after Node.js 4.2 Argon LTS.

We’ll look at how to install Node.js 5.0 Stable on Ubuntu 14.04 LTS.

Get the setup script:

curl -sL https://deb.nodesource.com/setup_5.x | sudo -E bash -

Once done, start installation of Node.js by executing the following command:

sudo apt-get install -y nodejs

That’s all folks, you should now have Node.js 5.0 Stable installed and ready to go!

You can test the installation with the following command to find the version:

node -v

It should print the installed version, something like v5.0.0.
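As a further smoke test (assuming node is now on your PATH), you can evaluate a quick one-liner:

```shell
# Evaluate a trivial expression with the freshly installed runtime; prints 5.
node -e "console.log(2 + 3)"
```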
Enjoy hacking with Node.js!

Failing is eventual. To be a failure is optional.

From as early as the beginning of human history, the generalization and objectification of failing, and the dogma associated with it, have preconditioned our thought process to see failing in a negative connotation – as an endpoint with no return.

However, what we fail to see (pun intended) is that the basis of every invention and discovery is something or someone failing – but not failing in entirety.

Failure and Failing. It starts with the question, “How surprised were you when you failed?” Or “How surprised are you going to be when you fail?”

Some of the variables that factor into failure are time, cost, effort, motivation and people – and, most importantly, planning and expectations.

It’s not to say that everything will work as per plan; it’s about expecting failures.

It’s about setting the expectation for yourself that the whole deal of “like clockwork” is a little superficial – even clocks fail.

Being overly cautious, loaded with multiple layers of contingency plans, might make one paranoid.

The idea is not merely to be paranoid, but to know and expect that failures are part of the process, and to kick in the fallback mechanism that handles each failure effectively.

While failures are part of the process, failing is not expecting failure – giving up and not finding another hack.

A Perfectionist’s Dilemma of Lean

Often one gets caught in dilemmas. While the state of confusion by itself is a good starting point, stagnation in that state isn’t.

While starting up, a perfectionist at times gets absorbed into the vortex of making everything perfect, and misunderstands or fails to see that starting up lean yields quicker results than the usual time-consuming method of developing a full-fledged, perfect product or service that runs the risk of getting shot down by the prospective customer.

Lean doesn’t advocate shoddiness or imperfection; rather, lean advocates cutting down the vanity and getting to the crux in the minimum possible time – a crux that is, of course, perfect for that stage. Lean isn’t about working from mere vague assumptions, but about making those assumptions using science.

It’s imperative to understand that you don’t need the whole product or service in a perfect state to reach out to your early adopters, because the perfect state is a state of constant evolution with shorter cycle times. What you need instead is a perfect Minimum Viable Product that conveys the crux of the idea, ready to take cues from the early adopters and be shaped to market needs by adding features and building on it.

I’d like to touch upon the science behind these decisions – without them being vague assumptions – in my next post, along with the Pigeonhole Principle.

Please let me know your thoughts around this; I will be happy to connect and learn your perspectives.

Artificial Intelligence

The very thing that’s the reason behind a fruit or a vegetable taking a shape it takes is intelligence. Who created that intelligence, where is it coming from and can it be replicated or manipulated are some key questions we constantly seek answers for.

Evolution states that everything evolves from one state to another. Artificial Intelligence, though created artificially, will develop the capability to evolve as well – apparently, that’s what we are trying to get at.

Now the whole concern is about human-made Artificial Intelligence gaining its own conscience. In theory, and for all practical purposes, this is equivalent to being concerned about the child we bring into this world gaining its own conscience.

What’s your thought on your child coming of age and developing his/her conscience?

Time Space Conundrum

Memories of the misunderstood past,
way too many and way too vast;

Figments of our imagination,
broken in parts only for realization;

Here we are, not watching us grow old together,
sans knowing the reason for our cold shoulder;

All the time we took to assimilate,
too little for us to inculcate;

Time and space good enough,
only for us to get even more tough.

Stars, Diamonds, Lightnings

In her eyes he saw them stars, shining bright, day and night.
Kisses parted, kisses educated, his learning was kinesthetic.
Broken smiles, beautiful eyes, only for him to read.
Special kid she ought to be, to make him even feel special.
Tougher than the diamonds, hotter than lightnings.
She cuts him, she melts him, bolstering that she is.

WordPress Permalinks Issue Solved

After almost twenty hours of nerve-wracking, hair-splitting searching, I finally got the much-irritating WordPress permalinks issue solved for my blog.

Yes, the one you are reading right now.

There are some standard procedures you might not want to skip.

First, check whether mod_rewrite is enabled in your Apache config. To enable mod_rewrite if it isn’t already, type the following in your terminal:

sudo a2enmod rewrite

It should enable the module, or tell you “Module rewrite already enabled”.

Now that mod_rewrite has been sorted, let’s tell Apache that it’s alright for the .htaccess file to override some server-level settings.

Look for “AllowOverride None” in your 000-default.conf, httpd.conf and httpd-vhosts.conf, and change it to “AllowOverride All”. It is important to check all these files – it took me several hours of searching to figure that out, as I was primarily tinkering with my httpd-vhosts.conf and httpd.conf and didn’t pay attention to the 000-default.conf file.
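For example, after the change, the relevant Directory block (the /var/www/html path is the Ubuntu default document root and may differ on your setup) would look something like this:

```apache
<Directory /var/www/html>
    Options Indexes FollowSymLinks
    AllowOverride All
    Require all granted
</Directory>
```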

Once you’ve made the change, restart Apache using the following command:

sudo service apache2 restart

Now, ensure there is a .htaccess file in your WordPress install directory.

If it’s not there, create an empty one using the following command in your terminal:

touch .htaccess && chmod 666 .htaccess

Don’t worry yet about adding anything in the .htaccess file. WordPress will handle it for you.

Now go to your WordPress Settings, set your Permalinks to whichever of the given options pleases you, and save the settings. This should generate the required content for the .htaccess file automatically.
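For reference, the rules WordPress writes into .htaccess for pretty permalinks (the standard rules for a site installed at the web root; RewriteBase differs for subdirectory installs) look like this:

```apache
# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress
```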

Visit your site to check that the fancy permalinks are working, then change the permission on the .htaccess file back to 644.

That should set you right with your WordPress Permalinks Issue.