Preslav Rachev

Thoughts and Ramblings.

What You May Have Missed

“What You May Have Missed” is my weekly recap of the tech/startup sphere. The following topics were relevant during the past week (March 27th - April 3rd, 2016).

Microsoft @ BUILD 2016: “We Love Bash!”

One of the highlights during the keynote of Microsoft’s annual BUILD 2016 developer conference was the integration of Bash within Windows. Wait, what?!? Yes, you heard right: Microsoft is partnering with Canonical, the company behind Ubuntu, to bring the real Linux command-line experience to Windows. And when I say real, I really mean it. This is not a virtual machine or a port of the Bash utilities. In fact, this is a genuine image of Ubuntu running natively on a Windows machine. Had I not switched to using a Mac (or Linux directly, when possible) for precisely this reason some five years ago, I’d be jumping to the sky right now.

Traditionally, the console under Windows has always felt limited compared to its UNIX-based counterparts. The closest one could get to the Bash experience was using a port such as Cygwin, or running a lightweight Linux distribution in a VM right within Windows. Both give developers some level of power, but at the cost of much incompatibility with the rest of the Windows ecosystem.

All in all, I am excited to see Microsoft making strides to embrace the developer community (and not just .NET developers) again. This is something I couldn’t have imagined happening under Steve Ballmer, or even Bill Gates for that matter. Thanks, Satya Nadella!

Read more:

Tesla unveiled its long-awaited Model 3

Tesla’s long-awaited Model 3 has finally been revealed, and it looks gorgeous, albeit a little weird at the front. The car will arrive in 2017, and will supposedly sell for $35,000. This is a pivotal moment for Tesla, as it is expected to bring its lineup to the broader market. The initial reception has so far been more than positive: just a couple of days after the official unveiling, Tesla CEO Elon Musk disclosed that preorders had more than doubled the expected numbers.

Facts aside, what perhaps makes the look of the new Model 3 so enigmatic is its grille-less front:

This does not mean at all that the car is ugly. On the contrary, the entire look is clean and elegant, though a little jarring at first encounter. The bold move from Tesla signals the end of combustion-engine domination, and the beginning of a new, cleaner world. We can only wait and hope to see it come true.

Read more:

Jalopnik’s verdict:

The Model 3 is sleek and clean and quite modern-looking. The short overhangs make it a bit less languidly elegant than a Model S, but the proportions are still quite good. The side window line kicks up pleasingly at the rear, without feeling too forced, and the greenhouse’s overall taper I suspect has aero benefits, but also gives the car a leaner overall look, and forms the ‘shoulders’ or ‘haunches’ at the rear that suggest a bit of animal-like musculature, a hint at the rear-wheel drive nature of the car.

The Tesla Model 3 Boldly Kills The Front Grille

The most anticipated new car of this evening and possibly this year, the Tesla Model 3, is finally here. We've seen it. And, almost immediately upon seeing it, we're judging it. Here's my design breakdown based on what we've seen so far. http://jalopnik.com/tesla-model-3-... Here we are.

In case you haven’t watched it, check out the Model 3’s official unveiling:

… and Elon Musk’s master plan in his own words:

The Secret Tesla Motors Master Plan (just between you and me)

Background: My day job is running a space transportation company called SpaceX, but on the side I am the chairman of Tesla Motors and help formulate the business and product strategy with Martin and the rest of the team. I have also been Tesla Motor's primary funding source from when the company was just three people and a business plan.

What You May Have Missed

“What You May Have Missed” is my weekly recap of the tech/startup sphere. The following topics were relevant during the past week (March 20 - March 27, 2016).

Kik, npm, and the disruption in Nodeland

TL;DR: A developer was asked to unpublish one of his modules, hosted on npm under the name Kik, due to a naming dispute with the popular chat app. When he declined to do so, npm execs agreed to change the ownership of the module without his permission. As a result, the developer unpublished all of his modules from npm in a single move. One of those modules - left-pad - turned out to be used by thousands of other modules. Its removal caused a global disturbance across dependent projects throughout the npm ecosystem.

Though the problem was fixed a few hours later, it spurred a discussion across the community about module granularity and the level of project dependence on 3rd-party code. Dependencies are essential building blocks in modern-day software development, yet many asked the question: “Should we rely on a module that is just a couple of lines long, or would we be better off writing those lines ourselves?” Some even crossed the line and asked if we have forgotten how to program.

Being both a Java and a JavaScript developer, I can understand the concerns of all sides. Java has a strong standard library, dictated by a relatively limited group of individuals. Also, for a long time after its early days, the Java ecosystem lacked strong dependency management tooling. Thus, there have been fewer 3rd-party libraries out there, each of which has tried to pack as much as possible into a single JAR file.

JavaScript, on the other hand, offers little to no standard library functionality. This makes the language easily portable, and lets the syntax evolve quickly. Also, when NodeJS was released, the idea of automated dependency management had already become a standard in software development. Combine those two arguments, and you’ll see why writing and using micro-modules is the norm and not an exception in the JS world. As a firm believer in both OSS and DRY, I still think that it is better to incorporate community code, even if this code might seem trivial. By doing so, you guarantee (for the most part) that this code will always be up-to-date, as language syntax and features evolve.

Here follows the story in links:

I've Just Liberated My Modules

Note: Thank you for all the support ❤ A few weeks ago a patent lawyer sent me an e-mail asking me to unpublish "kik" module from NPM. My answer was "no" and he replied me saying "I don't wanna be dick about it, but "kik" is our registered brand and our lawyers gonna be banging on your door, and taking down your accounts."

How one developer just broke Node, Babel and thousands of projects in 11 lines of JavaScript

Updated Programmers were left staring at broken builds and failed installations on Tuesday after someone toppled the Jenga tower of JavaScript. A couple of hours ago, Azer Koçulu unpublished more than 250 of his modules from NPM, which is a popular package manager used by JavaScript projects to install dependencies.

kik, left-pad, and npm

Earlier this week, many npm users suffered a disruption when a package that many projects depend on - directly or indirectly - was unpublished by its author, as part of a dispute over a package name.

A discussion about the breaking of the Internet

Hey everyone - I'm the head of messenger at Kik. I wish this didn't have to be my first post on Medium, but open source is something that I care about. I've published a few meager open source projects in the past, things that aren't groundbreaking but that I thought might be useful to other people, and I rely on countless others every day.

What if we had a great standard library in JavaScript?

Guess what? In the real world, this model quickly leads to a multitude of issues. The most obvious example is that if a package is removed for any reason, chaos quickly spreads. That's actually not the main problem though. As a developer of software, your job is it to find the simplest solution to a problem.

Microsoft’s chat bot goes awry. Who’s to blame?

Shortly after putting its A.I. chat bot, named “Tay”, online, Microsoft became the center of a growing online scandal. The reason: just a couple of days after the launch, the bot began posting tweets with clearly racist, misogynist, sexist, you name it, content. A feature intended for learning purposes quickly got raided by online trolls, who “taught” the machine that hate towards others is actually a good thing. As a result, much of Tay’s content had to be deleted, and its behavior “tamed”.

Many were quick to share the opinion that machines of this sort always need some sort of filtering mechanism, to protect them from learning all the wrong things. Yet, they forgot to name where those machines’ ideas come from - us. Just like our kids, learning machines start their lives with little to no knowledge about their surrounding environment. Both pick up the basics from interactions with their parents and trainers, respectively. Therefore, if a “Terminator”-like scenario ever happens, we should not blame it on machines, but on ourselves. Or, we could start preparing for a brighter future from today, by simply being better people.

Here follows the story in links:

Microsoft is deleting its AI chatbot's incredibly racist tweets

Microsoft's new AI chatbot went off the rails on Wednesday, posting a deluge of incredibly racist messages in response to questions. The tech company introduced "Tay" this week - a bot that responds to users' queries and emulates the casual, jokey speech patterns of a stereotypical millennial.

Learning from Tay's introduction - The Official Microsoft Blog

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.

Tay the racist Twitter bot should make us fear human nature, not A.I.

by Brian Fung Let me put it plainly. Despite what you may hear, Microsoft's racist, Hitler-loving A.I. is not how the robot uprising begins. You might have seen some reports by now about Tay, a bot designed to sound like a teenager on the Internet and to learn from her interactions with other people.

Injecting Services in Angular 2 and Ionic 2

Create a service:

import {Injectable} from 'angular2/core';
import {Http} from 'angular2/http';

@Injectable()
export class MyService {

    constructor(http: Http) {
        // initialize things
    }

    doSomething() {
        // do something
    }
}

Injecting the newly created service per component

In vanilla Angular 2.0, as well as in Ionic 2 applications, you need to import the service class and add it to the list of providers on the @Component / @Page metadata. This will create a new instance of the injected class per component.

import {Page} from 'ionic-angular';
import {MyService} from '../../services/my-service';

@Page({
  templateUrl: 'build/pages/home/home.html',
  providers: [MyService]
})
export class HomePage {
    constructor(myService: MyService) {
        // The newly instantiated service will be provided here
    }
}

Injecting a singleton instance of our new service

Carried over from version 1.0, Angular offers a bootstrap function, which helps set up an application manually. In Angular 1.0, the bootstrap function is usually used for starting an application without explicitly placing ng-app on a DOM element, i.e. starting the application at a later point in time.

In Angular 2.0, calling bootstrap is the default way of setting up an application. When creating a component that will serve as the main starting point of your application, simply add the component as the first parameter of the bootstrap function. Here is a modified example, based on the official Angular 2.0 docs:

import {Component} from 'angular2/core';
import {bootstrap} from 'angular2/platform/browser';
import {MyService} from '../../services/my-service';

@Component({
  selector: 'my-app', template: 'Hello {{name}}!'
})
class MyApp {
  name: string = 'World';
}

function main() {
  return bootstrap(MyApp, [MyService /* and other dependencies you want to instantiate globally */]);
}

Ionic 2 does not explicitly work with bootstrap, but requires at least one component, decorated with an @App decorator. To inject a dependency which should be available as a singleton to all components across the application, add the dependency to the list of providers in your main app component, and simply do not add it to any provider list anywhere else. If you do so, the single dependency instance will be provided as a constructor parameter to every component that asks for it. Here is an example from a sample Ionic 2 app:

@App({
  templateUrl: 'build/app.html',
  providers: [ConferenceData, UserData]
})
class ConferenceApp {
  // ...
}

And the ConferenceData instance will be available in every other component, without an explicit providers declaration:

// Import paths below follow the layout of the Ionic 2 conference sample app
import {IonicApp, NavController, Page} from 'ionic-angular';
import {ConferenceData} from '../../providers/conference-data';
import {UserData} from '../../providers/user-data';

@Page({
  templateUrl: 'build/pages/schedule/schedule.html'
})
export class SchedulePage {
  static get parameters() {
      return [[IonicApp], [NavController], [ConferenceData], [UserData]];
  }

  constructor(app, nav, confData, userData) {
      // confData will be the single instance declared in your ConferenceApp
      this.confData = confData;
      // ...
  }
}

Tip: Add a Custom Request Interceptor to Your Retrofit-2.0-Based API

Often, it is useful to be able to modify all of your API requests, without explicitly modifying each and every request’s definition. A typical example is an authentication token parameter that needs to be present in every request.

Retrofit used to provide its own RequestInterceptor support. With the advent of Retrofit 2.0, though, this functionality has been removed, in favor of using OkHttp’s existing interceptors. As of version 2.0, Retrofit uses 100% of OkHttp’s magic under the hood. In fact, creating a new Retrofit instance in your code implicitly instantiates an OkHttpClient to serve the requests and deliver the responses. The way to add your request interceptors is to instantiate your own OkHttpClient instance, add the request interceptor to it, and provide it to Retrofit’s builder, before the Retrofit instance gets built.

// Imports assume a Retrofit 2.0 beta setup together with OkHttp 2.x,
// where the interceptor API shown here lives.
import java.io.IOException;

import com.squareup.okhttp.HttpUrl;
import com.squareup.okhttp.Interceptor;
import com.squareup.okhttp.OkHttpClient;
import com.squareup.okhttp.Request;
import com.squareup.okhttp.Response;
import retrofit.Retrofit;

final OkHttpClient okHttpClient = new OkHttpClient();
okHttpClient.interceptors().add(new Interceptor() {
    @Override
    public Response intercept(Chain chain) throws IOException {

        final HttpUrl modifiedUrl = chain.request()
                .httpUrl().newBuilder()
                // Provide your custom parameter here
                .addQueryParameter("token", token)
                .build();

        final Request request = chain.request().newBuilder().url(modifiedUrl).build();

        return chain.proceed(request);
    }
});

final Retrofit retrofit = new Retrofit.Builder()
            .baseUrl("<BASE_URL>")
            .client(okHttpClient)
            .build();

Tip: Upgrade the Android Gradle Plugin for Android Studio 2.0

When working with Android Studio 2.0, Gradle may occasionally decide to stop building your application, and throw you the following error instead: Plugin is too old, please update to a more recent version, or set ANDROID_DAILY_OVERRIDE environment variable to “XXXX”.

This is due to the fact that AS 2.0 uses an alpha version of the Android plugin for Gradle. As new versions of the plugin get released, older ones may become incompatible. Therefore, it is recommended to change the version of the Android plugin manually, and test whether everything still works as it should.

How to solve the problem

One possibility is to simply follow the instructions and set an environment variable with the specified value. On OSX this looks as follows:

launchctl setenv ANDROID_DAILY_OVERRIDE <your-value-on-error-message>

Afterwards, clean the project, restart AS, and build again.

If you want to have precise control over the version of the Android plugin for Gradle, which AS is currently using, open the base build.gradle file and locate the classpath to the Android plugin. In my case, this was:

classpath 'com.android.tools.build:gradle:2.0.0-alpha1'

Check Bintray for the latest version of the plugin (2.0.0-alpha3 at the time of this writing) and update correspondingly. Do not forget to sync your Gradle changes, clean, and rebuild the project, in order to avoid issues with temporary files or cached data.

Merging Multiple Maps Using Java 8 Streams

Often, we’re faced with situations in which we have to merge multiple Map instances into a single one, and guarantee that key duplicates are handled properly. In most imperative programming languages, including Java, this is a trivial problem. With a few variables to store state, a couple of nested loops, and several if-else statements, even people new to Java could program a solution within minutes. Yet, is this the best way to do it? Though guaranteed to work, such solutions could easily get out of hand and become incomprehensible at a later point of development.

Java 8 brought with it the concept of streams, and opened the door to solving such problems in a declarative, functional manner. Moreover, it reduced the need to store intermediate state in variables, eliminating the possibility of some other code corrupting that state at runtime.

The Problem

Suppose we have two maps, visitCounts1, visitCounts2 : Map<Long, Integer>, where the maps represent the results of different search queries. The key of each map is the ID of a certain user, and the value is the number of user visits to a given part of the system. We want to merge those two maps using streams, such that where the same key (user ID) appears in both, the values are summed into the total number of user visits across all parts of the system.

The Solution

First, we need to combine all maps into a unified Stream. There are multiple ways of doing this, but my preferred one is using Stream.concat() and passing the entry sets of all maps as parameters:

Stream.concat(visitCounts1.entrySet().stream(), visitCounts2.entrySet().stream());

Then comes the collecting part. Collecting in Java 8 streams is a terminal operation. It takes a given stream, applies all transformations (mostly from map, filter, etc.), and outputs an instance of a common Java collection type: List, Set, Map, etc. Most common collectors reside in the java.util.stream.Collectors factory class. I will use Collectors.toMap() for my purposes.

The default implementation of Collectors.toMap() takes two lambda parameters:

public static <T,K,U> Collector<T,?,Map<K,U>> toMap(Function<? super T,? extends K> keyMapper, Function<? super T,? extends U> valueMapper);

Upon iterating over the stream, both lambda parameters get called and passed the current stream entry as an input parameter. The first lambda is supposed to extract and return a key, whereas the second lambda is supposed to extract and return a value from the same entry. This key-value pair would then serve for creating a new entry in the final map.

Combining the first two points, our resulting Map instance would so far look like this:

Map<Long, Integer> totalVisitCounts = Stream.concat(visitCounts1.entrySet().stream(), visitCounts2.entrySet().stream())
  .collect(Collectors.toMap(
      entry -> entry.getKey(), // The key
      entry -> entry.getValue() // The value
  )
);

What happens here is rather straightforward. The collector would use the keys and values of the existing maps to create entries in the resulting map. Of course, trying to merge maps with duplicate keys will result in an exception (an IllegalStateException, to be precise).

A little-known version of the same method accepts a third lambda parameter, known as the “merger”. This lambda function will be called every time duplicate keys are detected. The two conflicting values are passed as parameters, and it is left to the logic in the function to decide what the ultimate value will be. This third lambda makes solving our problem easy, and in a very elegant manner:

Map<Long, Integer> totalVisitCounts = Stream.concat(visitCounts1.entrySet().stream(), visitCounts2.entrySet().stream())
  .collect(Collectors.toMap(
      entry -> entry.getKey(), // The key
      entry -> entry.getValue(), // The value
      // The "merger"
      (count1, count2) -> count1 + count2
  )
);

Or simply, using a method reference:

Map<Long, Integer> totalVisitCounts = Stream.concat(visitCounts1.entrySet().stream(), visitCounts2.entrySet().stream())
  .collect(Collectors.toMap(
      entry -> entry.getKey(), // The key
      entry -> entry.getValue(), // The value
      // The "merger" as a method reference
      Integer::sum
  )
);

Evernote, Dead Unicorns and Card Houses

Photo Credit: Peter Roberts @ Flickr

I got my Business degree about five years ago. Those were some four tough years, juggling two majors (Business and Computer Science). I came out of college with little life experience, but lots of principles in my head. At the time, I thought I knew what “building a business” meant, and I wanted in. I wanted to make a tech startup, build great products, and make my customers happy. Because that’s what a business does, right? It profits by building valuable products that make customers happy. Right?

Nope. Rather, the majority of your time building a startup nowadays involves convincing a small group of people (investors) that your company is worth way more than it is. That’s pretty much all there is to it. Fine-tuning an image by skipping business development altogether and focusing solely on growth instead. The more users your product/service has, and the faster you get new ones, the bigger the valuation of your company gets.

What about real customers and steady revenues, you ask? Who needs them? In fact, having a steady revenue flow makes investors back off. Apparently, it serves as a signal that your business is going to stop growing. Since growth is the most important thing in the Startup Universe, focusing on building real customer relations automatically puts your business in the “yesterday’s news” category. Crazy, isn’t it?

Chasing the Unicorn

Silicon Valley has done startups a disservice by coining the term “unicorn”. Shortly after the 2008 financial collapse, many investors turned their backs on companies like Uber, AirBnb, Snapchat, Whatsapp, Spotify, etc. All of them proved their early rejecters wrong, and went on to become multi-billion-dollar successes. Realizing the opportunity they missed, investors have been on the hunt ever since, chasing every potential opportunity to make up for their mistake. They have been on the hunt for the next “unicorn”. In their effort not to miss another AirBnb or Uber, investors have lowered the barrier to entry to just about any idea that promises to become an overnight success. How the idea turns into a viable business is of secondary importance; what matters is having it in your portfolio before your rivals jump on board.

I’ve been told several times that there has never been a better time to obtain venture capital. No wonder. I can start building just another service that does “Uber for X”, and jump on the train. Yet, try to build something meaningful, something with a clear goal and a carefully laid out business plan, and you are quickly shown the door. There’s only so much growth you can promise in a consumer-driven business.

Besides the fact that it looks like another bubble, I find all of this kind of saddening.

Evernote, Dead Unicorn?

Tech media love creating drama and stirring the waters around companies. A company drops off the celebrity radar for a while, or decides to make a management shift, and suddenly it gets pronounced dead prematurely. For the media, it’s all or nothing: your business is either a “unicorn”, or a “dead unicorn”. For a while, there has been speculation about which the first “dead unicorns” will be. Just in time, stories about Evernote falling off the list started appearing.

Clearly, controversial titles bring in fresh ad revenues, but they do a disservice to both readers and businesses. Calling a company a “dead unicorn” sounds as if it is going to close up shop any time soon. Not even close. I felt the need to bring some sanity to this discussion.

A “dead unicorn” simply means that a company has gone past its peak potential to grow exponentially. This is absolutely normal, after all. Anyone who has written a couple of business plans knows that an important part of the plan itself is the estimate of the maximum target audience. Whether you produce digital goods or not, at the end of the day, there is a maximum number of users your business could ever reach. Even if it were theoretically possible to target each and every individual on the planet, one’s business would eventually become dependent on the rate of population growth. And believe me, most businesses greatly overestimate their target audience, counting one-time bystanders as part of their potential user base.

Of course, it is possible to acquire new target audiences by introducing additional features, acquiring key companies, or entering strategic partnerships. Yet, as with other things in life, one comes at the cost of another. Stretching too far out leaves a business with an even shallower market proposition, leading to new users willing to invest less of their attention in the product/service. Ultimately, in one way or another, things come back to square one.

So, in a way, getting out of the list of the blessed means that it’s time for the business to mature and start functioning like … well, like a real business does. That’s right, it is time to make the masses start paying for their lunch. Yet, it is one thing to gradually start doing so as the business grows, and quite another when the business has reached the point of no return. Giving too much away for free can be as detrimental to the business model at a later stage in life as giving too little right at the start.

Evernote knows this quite well. It is a valuable product, known for its large ratio of free users to paying customers. The reason: for years, Evernote has been focusing on growth, providing free users with so many features that it has made no sense to switch to a paid plan. I have been a long-time user of Evernote, and used to pay for Premium too. Yet, I have always done it more out of a sense of loyalty to the company than because of a Premium feature I really needed. Loyalty alone is not compelling enough to make people pay for a product/service. So, I switched back to a free plan, and nothing really changed. It will be much harder for the company to bring me back as a paying customer.

Even more so, because I have not seen significant improvement in some core functionalities that core users have been clamoring for since version 2. Meanwhile, during the last few years, I have seen all kinds of features and additional services appearing in the Evernote ecosystem. All of which shows that the company’s management would rather invest in attracting new free users than in maturing its long-time users into paying customers. Of course, one can’t blame the management; the situation is just a byproduct of the whole “go for growth” trend set by investors.

So, is Evernote a “dead unicorn”, the way investors and the media see it? Perhaps. Will it die as a company? No. It will just have to work hard to convert more of its free users into actively paying customers. It has a mature enough business to stop calling itself a startup and start working like a real business. Indeed, it is a less flashy future than being on the front pages of tech media sites every week, but not the least bit less valuable or fulfilling.

Moral

What’s the moral of the story? If you want to start a business, strive to bring something valuable to the world, and don’t try to conform if investors turn their backs on you. Focus on realistic growth, and make it clear to people what value your business will bring to them. Make customers happy and they will gladly pay for your products in return. If you really believe in your idea, your business may indeed reach unicorn status, but of the good kind - the long-lasting kind. The one that withstands media and investor speculation.

Setting Up a New Mac for Development

Latest update: Nov 28th, 2015

Command-Line Utilities

Homebrew

An absolute must. If you have ever worked with apt-get on Ubuntu, you know what absolute developer bliss it is. Homebrew (or brew for short) is the missing package manager for OSX. Not only does it allow you to install/uninstall/manage software with a few simple commands, the same way that apt-get does; it also lets you bring ports of most of the cross-platform utilities you might be familiar with from Linux over to OSX.

Installing Brew

Simply open a new terminal and execute the following command:

ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"

Installing the latest Java JDK

Most certainly, your OS X comes with an outdated Java version, or no Java at all. In either case, you will need to install the latest one.
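
One way to do that from the terminal, assuming you also set up Homebrew Cask (the commands below reflect how Cask was installed and invoked at the time of writing), is the following sketch; alternatively, simply download the installer from Oracle’s website:

brew install caskroom/cask/brew-cask
brew cask install java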

Where is JAVA_HOME?

If you don’t see the JAVA_HOME environment variable, don’t worry. You can easily set it, without having to know the exact path to your JDK’s location. Add the following to your ~/.bash_profile file:

export JAVA_HOME="$(/usr/libexec/java_home -v <JAVA_VERSION>)"

Where JAVA_VERSION will most probably be 1.8, but could also be set to 1.7 or 1.6 (depending on which version you installed).
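
To double-check that everything is wired up correctly, you can print the resolved path and the active Java version - a quick sanity check, nothing more:

echo $JAVA_HOME
java -version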

Setting up your SSH keys

SSH keys allow you to communicate with other machines or services using an encrypted connection, and without having to type a password every time. Setting up an SSH key is highly recommended, if not outright required, by most services used by developers: GitHub, Heroku, BitBucket, etc.

To generate a new SSH key, open a Terminal and type:

ssh-keygen -t rsa

and follow the instructions. By default, ssh-keygen will generate a private key id_rsa and a public one id_rsa.pub, and put them in the ~/.ssh directory. You can, of course, decide to choose a different path and location for your key files. Keep in mind that if you do so, you would have to either explicitly tell applications where to look for your keys, or add configuration entries to your ~/.ssh/config file. Both ways will be explained in further sections.

Share your public key

One of the common uses of SSH keys is to connect to other machines, without having to supply a password. To be able to do so, you have to share your newly generated public key with the remote machine. Open a terminal and type:

cat ~/.ssh/id_rsa.pub | ssh <REMOTE_MACHINE_ADDRESS> "cat >> ~/.ssh/authorized_keys"

The above command will require your password once; then it will append your public key to the ~/.ssh/authorized_keys file on the remote machine. The next time you log in to the remote machine via ssh, your password will no longer be required.

Fixing your Git settings

Whether Git comes preinstalled on your OS X, or you set it up separately, make sure to have a look at the default settings that it comes with. There are a couple of important settings that you might need to set up yourself. For instance, your name and email. If you commit to a collaborative Git repository, your email and name are used to identify your commits, and distinguish them from those of your peers. Therefore, make sure to check the following two settings:

git config --get user.name
git config --get user.email

and change them to appropriate values, if needed:

git config --global user.name "John Doe"
git config --global user.email johndoe@example.com

Set your core Git editor to the one you’re most comfortable with

Another setting you would want to have a look at is your core Git editor. The default is set to vim. If you are like me, and you still haven’t managed to really master those vim skills, you may want to change it to something more lightweight, like nano, for instance:

git config --global core.editor <YOUR_FAVORITE_EDITOR_EG_NANO_OR_EMACS>

A text editor is used all over the place by Git, so better make sure to check that out.

Other Developer Utilities

Ruby

RVM (Ruby Version Manager)

RVM allows you to install and manage multiple Ruby environments on the same machine. This is really helpful, since gems often require a version of Ruby different from the one that OSX ships with.

Installing RVM

First and foremost, it is good to have Brew installed. You can use brew to install gpg - a program used to verify the authenticity of the RVM download.

brew install gpg

Install the security key for rvm

command curl -sSL https://rvm.io/mpapis.asc | gpg --import -

And finally, rvm itself

curl -L https://get.rvm.io | bash -s stable
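
Once RVM is in place, installing and switching between Ruby versions comes down to a couple of commands. A short sketch, where 2.2 is just an example version:

rvm install 2.2
rvm use 2.2 --default
rvm list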

PHP

Composer

Composer is a dependency manager for PHP. It does the heavy-duty job of automatically downloading dependency PHP libraries and organizing them in such a way that projects can use them right away. Composer is similar to Node.js’ npm, Ruby’s gem, or Java’s Ivy/Maven/Gradle (though Gradle and Maven are full-scale build systems; downloading dependencies is just one of the many things they’re used for).

Installing Composer

Composer can be installed at a project level, or globally (at the user level). Installing Composer locally:

$ curl -sS https://getcomposer.org/installer | php
# now, you can run Composer using php
$ php composer.phar

or globally (which basically means moving composer.phar under /usr/local/bin/):

$ curl -sS https://getcomposer.org/installer | php
$ mv composer.phar /usr/local/bin/composer
$ composer
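
With Composer available globally, pulling a library into a project is a one-liner. A brief sketch - the package name is only an illustration:

$ composer require monolog/monolog
# or, if the project already lists its dependencies in composer.json
$ composer install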

to be continued …

ngMock Helpers

Let’s take a controller test straight out of the Angular JS Docs:

describe('PasswordController', function() {
  beforeEach(module('app'));

  var $controller;

  beforeEach(inject(function(_$controller_){
    // The injector unwraps the underscores (_) from around the parameter names when matching
    $controller = _$controller_;
  }));

  describe('$scope.grade', function() {
    it('sets the strength to "strong" if the password length is >8 chars', function() {
      var $scope = {};
      var controller = $controller('PasswordController', { $scope: $scope });
      $scope.password = 'longerthaneightchars';
      $scope.grade();
      expect($scope.strength).toEqual('strong');
    });
  });
});

If you are familiar with the syntax of behavior driven development testing frameworks, such as Mocha and Jasmine, you already know what describe() and it() are used for.

There are three peculiar things, though, that an Angular JS novice would spot right away: module(‘app’), _$controller_, and inject(…). These helpers are not part of the testing framework itself, but are included in the optional Angular Mocks (ngMock) module. ngMock offers many additional niceties, in particular the $httpBackend interface for mocking XHR requests. In this post, though, I will look at the three mentioned above.

Where do these come from?

When setting up the testing environment for your Angular JS app, you are advised to install and add the Angular Mocks module:

bower install angular-mocks --save-dev

Depending on whether you are using an automated test executor or not, you should make sure to include the Angular Mocks before the start of your tests. For example, in case you are using the popular Karma runner, the <BOWER_COMPONENTS>/angular-mocks/angular-mocks.js file must be present in your Karma configuration file:

...

// list of files / patterns to load in the browser
// Make sure to add the angular-mocks.js file
files: [
  '...',
  '../www/lib/angular-mocks/angular-mocks.js',
],

...

The module() and inject() helper methods are both part of the angular.mock module. When Angular Mocks is included, it checks whether Jasmine or Mocha is present, and attaches these two methods to window. This way, they are easily accessible across all your unit tests:

// angular-mocks.js

if (window.jasmine || window.mocha) {
    ...
    window.module = angular.mock.module = function() { ... }
    ...
    window.inject = angular.mock.inject = function() { ... }
}

These two do exactly what they are supposed to. module(…) loads a particular module, whereas inject(…) manually injects a given component from that module.

What about the underscores?

Look at the following example:

// In testing, we'd often want to inject a particular component
// and reuse it across tests

var myService;

beforeEach(inject(function(myService) {
    // This is ambiguous ...
    // the assignment only touches the local (injected) parameter,
    // so the outer myService variable is never set
    myService = myService;
}));

it('uses my service', function() {
    // myService is undefined!!!
    myService.serviceMethod();
});

Adding an underscore before and after the desired parameter helps distinguish the injected parameter from the outer variable we want to assign it to:

var myService;

beforeEach(inject(function(_myService_) {
    // It is clear now what we want to do ...
    myService = _myService_;
}));

it('uses my service', function() {
    // The service method call will work
    myService.serviceMethod();
});

This so-called “underscore wrapping” is also made possible by the Angular Mocks module. Whenever you wrap an expected component name with underscores, Angular Mocks will make sure to “unwrap” it and inject the proper dependency.

On Google+

Google’s latest decision to drop the requirement of having a Google+ account in order to use its other products (e.g. YouTube) brought many to the conclusion that this is it for the service. Whether Google will pull the plug on G+ is unclear, yet highly probable. Sadly, this seems like the most logical decision to take. Sadly, because the one thing that Google+ has been good at - building a tight community of avid and outspoken individuals - has never been Google’s goal from the onset. It has always been about conquering the masses, and putting a hand on top of Facebook’s user empire.

I was one of the avid individuals who believed in Google+, and who, in one way or another, has remained true to it until this day. Believe it or not (probably not, but it’s true), if you have been among the first few thousand early adopters of G+, you have most probably become friends with people who, even today, spend a significant part of their daily lives on the service.

90/10 as the new 80/20

Though not a real estimate, let’s assume that around 10% of all registered Google+ users are still active. In fact, they are so active that they account for around 90% (again, my own estimate) of the daily user activity on Google+. And by active I mean loyal - posting regularly, talking to each other, giving feedback and advice, etc. In fact, of all the social services I have ever been active on (Facebook and Twitter included), the level of engagement on Google+ has always dwarfed the rest by leagues and miles. That is, until recently.

For about a year, I’ve been seeing a serious decline in G+ activity. And by that I don’t mean that the number of new users, quasi-forced by Google to open a G+ account, has declined. No, I mean a serious drop in activity by its most loyal users. They are still there, and keep actively posting and talking to each other. Yet, the intensity of the discussions and the feedback, the only two things I kept coming to G+ for, is not there anymore. Long gone are the times when we spent days without end commenting on the future and inevitable death of Flash.

The latest feature in Google+ - post collections - is interesting and somewhat useful. However, it also openly admits that G+ has long stopped being the place for posting original content, and has become one for re-posting and mashing up content from outside sources. I can see some potential benefit for Google. Content curation is easier than content creation. Having humans organize pieces of content can also help train Google’s machine-learning intelligence. Though, as I’ve learned from my own experience building PinApp, curation does not create engagement the same way publishing one’s own content does. When the engagement is gone, so will be the motivation to keep curating.

I don’t think that building a niche service for Facebook-averse, tech-savvy early adopters has ever been Google’s goal in launching G+. Yet, that’s what’s left of it. And it is slowly dying, at that.