Spotting a “crappy” boss in the interview

“My boss is an asshole”

“My boss is the worst ever!”

“If I had known he was this bad, I would never have taken this position!”

How many times have you felt this way? Or heard your coworkers and friends complain about their managers?

Are those managers really “crappy assholes”?

Certainly some managers are universally horrible, and the news media loves the really juicy stories. The sexist “handy” manager who confuses management with an opportunity for free sexual favors is clearly an “asshole” manager.

Most of the time, managers are simply average. The “crappy” comes not from the manager but from a clash of styles between manager and employee, and from the manager’s lack of skill or willingness to improve their communication.

Things work out OK if the manager’s default management style matches how an employee likes to be managed.

For example, a manager may prefer a friendly, collaborative style. Employees who like an easygoing manager will find this manager excellent. Another employee, one who prefers a “decisive” manager that makes quick decisions and sticks to them, may regard the very same manager as “indecisive.”

Another factor to consider is the management style set by the CEO or division manager. Managers adopt the management style preferred by the company as a whole. A company run by a cutthroat, hard-charging CEO will have managers who mimic this management style. It is rare that a successful manager adopts a different management style from the style set by the company culture.

To answer the original question:

How can I spot “crappy” bosses?

you must know the answers to the questions below. Keep in mind that there is no right answer; these are a reflection of who you are as a person:

How do you like to be managed?

  1. Do you need/like a manager that demands long hours?
  2. Do you like a manager that is collaborative or stays out of your way?
  3. Do you like to be rewarded based on your individual contribution or more on team results?
  4. Do you like a manager that is parsimonious with praise, or do you need praise more frequently?

What kind of communication do you want or need from a manager?

  1. “I want to hear from my manager every day, every week, or just once a year to get my bonus and 15% salary increase”
  2. “I need a personal connection to my manager” vs. “I don’t care about his kids.”
  3. “I want to check in daily with my manager to make sure that she is happy with my progress” vs. “My manager micromanages me because she is checking in more than once a week.”

How adaptable are you to different communication styles or managers?

An employee needs a certain amount of flexibility with regard to management style. However, some management styles are too different from your personality for you to thrive under that manager or in that company. This is not a personal failing; this is human nature. A person who can’t stand being a salesperson but thrives as an accountant is not failing because they prefer accounting.

How adaptable is the company and the manager?

Some companies expend a great deal of effort to train managers; other companies expect employees to “be adults and deal with it.”

Knowing this will tell you how much wiggle room there is when matching styles.

In the interview

Once you know your style and what management styles will bring out your best performance, you can prepare open-ended, innocent questions to determine the company’s management style.

Here are some suggestions:

  1. What kind of manager are you?
  2. Tell me about your best employee: what makes her so good?
  3. Tell me how you reward employees.
  4. Tell me how you deal with conflict between team members.
  5. How does an employee get noticed for promotion?
  6. How do you handle a project that is slipping?

Lastly, try not to compromise

If you know a certain management style is going to make you miserable, don’t go to the company. If you are superhuman, you might be able to “tough it out.” More likely, you will be miserable, underperform, and be fired.

Sometimes life doesn’t give us a choice and you have to deal with a management style that is not ideal for you. But this post is a starting point for understanding your misery and how you can adapt to your current situation. You might discover, with conscious planning, that you can thrive in a difficult management situation.

(Personal note: 58 minutes to create)


LinkedIn has lost its Vision

The Promise

A few years ago, I worked at LinkedIn. At that time, Reid Hoffman had a very clear vision for LinkedIn.

LinkedIn was “Resume 2.0” for the middle managers and the professional individual contributors who really make businesses function. LinkedIn would enable those people to highlight and show off their abilities.

Through LinkedIn, outside recruiters would see the LinkedIn members’ professional competency. The invisible professionals would get more economic opportunities.

LinkedIn members were members to be helped, not users to be exploited. This was a unique social bargain, unmatched by any other social network. LinkedIn members would keep their professional profile updated with their performance and skills. In exchange, LinkedIn would use that profile to help the members find new and better opportunities.

LinkedIn also built a member’s professional network: LinkedIn became a place to do reference checks in a quiet way. A place to find people without posting a job. A place to do business. A LinkedIn profile became a professional necessity: an electronic business card.

LinkedIn became the dominant social network for conducting business.

In exchange, LinkedIn would then sell access to those members’ profiles to recruiters looking to hire the professionals. A win-win for all.

For many years, LinkedIn enjoyed its success with rich stock P/E multiples.

Complacency Today

[Chart: LinkedIn stock price]

Earlier in 2016, LinkedIn lost its bloom. A year ago LNKD was at $269; today it is at $110. What happened?

LinkedIn has forgotten the unique “fuel” that powers the money machine: the members and their willingness to keep their profiles up to date.

This exchange I had with a college senior is typical:

[Screenshot: exchange with a college senior]

But more telling are these snapshots from LinkedIn’s own employees:

[Screenshots: LinkedIn employees’ own profiles]
LinkedIn’s own employees don’t see the value of updating their own LinkedIn profile!

Amongst my friends, the typical reason for not updating their LinkedIn profile is one of the following:

  • “I am not looking for a new job”
  • “I am looking for a new job, but I don’t want my manager to know I am looking.”
  • “I just got a new job, and I don’t know if it is going to work out, so I am not putting it in my profile until I know that it will.”

Which then leaves the very pointed question:

“When exactly will someone update their LinkedIn profile?”

Remember, an up-to-date profile is LinkedIn’s key asset, the one that powers LinkedIn Talent Solutions:

[Chart: LinkedIn revenue breakdown]

62.5% of LNKD’s revenue depends on members keeping their profile up-to-date.

Outside recruiters still use LinkedIn as part of their recruitment process. However, the in-house recruiters I have talked to get better results with Indeed. With Indeed, recruiters know a person is actively looking, the resume is actively updated, and there is much more detail compared to a LinkedIn profile. (Please note: I am a LNKD shareholder and I have no financial interest in Indeed.)

If LinkedIn were demonstrating its true potential, the resume would be a subset of the LinkedIn profile, and Indeed would not be valuable to recruiters.

LinkedIn’s Misaligned Focus

Yet LinkedIn’s focus is on… Sales Solutions and Talent Solutions.

Let’s take a look at the Sales Solutions product. Sales Solutions depends on members having a quality network. It depends on members being willing to keep their important business connections in LinkedIn.

With Talent Solutions and the members’ profiles, the promise was that members would get better jobs, more career advancement opportunities, and more money: a direct, measurable, economic benefit to the members.

Over the years, LinkedIn has become a business card proxy. LinkedIn users hand out their LinkedIn member URL at conferences. They connect to people with whom they may not have an actual business relationship. Over the years, the LinkedIn network accumulates people who are little more than distant memories.

Members see even less value in pruning and updating their network. LinkedIn offers members minimal tools to annotate and get personal value from LinkedIn member information. Personally, I have to refer to my email history to figure out how and why I know a person in my LinkedIn network.

And now in 2016, LinkedIn wants to sell access to members’ networks with Sales Solutions. Sales Solutions’ benefit to members is the opportunity to be lukewarm-called instead of cold-called.

This is not an incentive for members to use LinkedIn: they are now a product being sold.

LinkedIn forgot to ask: what is the economic value to a member of keeping their network up-to-date?

What LinkedIn needs to change: Future focus

As of today, a member’s LinkedIn profile is all about the past: past job titles, past companies, past education.

What is completely missing is any sense of the future. As the stock market reports say, “past performance is not a guarantee of future results.” What a LinkedIn professional has done in the past is no guarantee of what they see in their future:

  • Is the member looking for new opportunities?
  • Does the member want to change careers?
  • Does the member want to move to a new city or country?
  • Does the LinkedIn member want to move from the corporate world to teach at a university?
  • Does the member want to start a company that combines professional skills with a hobby: developing software for sailors, or starting a chain of swimming schools?

LinkedIn does not help members prepare for the future:

  • What skills does the member need within the next 5 years?
  • Is the member at risk because their experience is stagnating?
  • Which of two different job opportunities is more likely to pay off?
  • What skills are needed to make a career change?

Not eating the dog food

You may agree or disagree with me. If you choose to disagree, you need to ask and answer the “dog food” question.

Fundamentally, LinkedIn needs to look to their own employees and ask: 

Why is your LinkedIn profile allowed to decay into a mere skeleton?

In startup land there is the term “eat your own dog food”: prove that your startup’s product is valuable by using it internally.

If LinkedIn’s own employees don’t find LinkedIn valuable, investors are right to question LinkedIn’s value as a company.

Some discussion happening here


Will cosmic radiation impose a maximum on computer functionality?

This IEEE article, “How To Kill A Supercomputer: Dirty Power, Cosmic Rays, and Bad Solder,” asks: will future exascale supercomputers be able to withstand the steady onslaught of routine faults?

Cosmic rays are a fact of life, and as transistors get smaller, the amount of energy it takes to spontaneously flip a bit gets smaller, too. By 2023, when exascale computers—ones capable of performing 10^18 operations per second—are predicted to arrive in the United States, transistors will likely be a third the size they are today, making them that much more prone to cosmic ray–induced errors. For this and other reasons, future exascale computers will be prone to crashing much more frequently than today’s supercomputers do. For me and others in the field, that prospect is one of the greatest impediments to making exascale computing a reality.

Some examples:

  • A high-profile example affected what was the second fastest supercomputer in the world in 2002, a machine called ASCI Q at Los Alamos National Laboratory. When it was first installed at the New Mexico lab, this computer couldn’t run more than an hour or so without crashing.
  • In the summer of 2003, Virginia Tech researchers built a large supercomputer out of 1,100 Apple Power Mac G5 computers. They called it Big Mac. To their dismay, they found that the failure rate was so high it was nearly impossible even to boot the whole system before it would crash.

    The problem was that the Power Mac G5 did not have error-correcting code (ECC) memory, and cosmic ray–induced particles were changing so many values in memory that out of the 1,100 Mac G5 computers, one was always crashing.

Everything from cosmic rays to weakly radioactive trace lines to failing power regulators can cause a supercomputer crash.

Fascinating stuff. I do have some questions that I hope to hear answers to at some point:

  • Each core has to be able to run ‘independently’ to some degree (as Google’s machines do) – what can the supercomputer field borrow from large data centers?
  • Why does the total state need to be saved – why can’t state preservation happen on a core-by-core basis?
  • Why not have 3 cores work on each part of the problem and use consensus to determine the correct answer (this is how the Space Shuttle operated)? See the sketch after this list.
  • What is the nature of the problems that supercomputers are solving that prevents the Google mass-of-computers solution from being used?
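
To make the consensus question concrete, here is a toy JavaScript sketch of triple modular redundancy (the function and values are my own illustration, not from the article): run the same step on three cores and majority-vote the results, masking any single corrupted value.

// Toy triple-modular-redundancy vote: any single bad result is outvoted.
function votedResult(a, b, c) {
  if (a === b || a === c) return a; // a agrees with at least one other core
  if (b === c) return b;            // a was the corrupted result
  throw new Error('no two results agree: unrecoverable triple fault');
}

// Hypothetical run where one "core" suffered a bit flip:
console.log(votedResult(42, 42, 46)); // 42 (46 is 42 with bit 2 flipped)

The cost is obvious: three times the hardware for every computation, plus the voting logic itself, which is presumably why supercomputers lean on cheaper schemes like ECC memory and checkpointing instead.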

These questions lead to further interesting questions:

  • How will any meaningful quantum computer operate?
  • Biology has even greater information/compute density – how does biology deal with errors, and what can CS learn from biology?
  • Will this prevent humans from having good computers in space (assuming that we ever get off this rock in a meaningful way)?
  • Is there an information theory in the making that can put a theoretical maximum on reliable information density based on radiation level? I.e., will the error-correction logic consume any gains in bit density? Will radiation density impose a minimum trace thickness?

Google and Mountain View – the background to the dance

Most commentators on Hacker News had incomplete or wrong information.

Here is some basic background to help set the context.

  1. Until recently, Google basically ignored Mountain View south of 101; it didn’t own anything south of 101.
  2. Google is not going anywhere. Mountain View is right next to Moffett Airfield, which is where Larry and Sergey park their 767. Maybe Sunnyvale would be an alternative, but I really doubt there are many spots where L and S can be within walking distance of their 767. Furthermore, Google is leasing a shit-ton of land from NASA to expand on.
  3. It’s not just Google: Google, LinkedIn, Intuit, and Symantec all have HQs here. Microsoft/Nokia also have their research centers here.
  4. Google subsidizes all services at their HQ; as a result, outside retail/restaurants have pretty much died north of 101. This results in local business owners/voters complaining to the city council.
  5. Google woke up to the political problem when the MV city council told Google that they were not going to be allowed to build a connection bridge for buses across Stevens Creek Trail (a trail that MV residents worked 25 years to create).
  6. Google’s land purchases have resulted in increases in real-estate taxes; however, Google generates NO sales tax revenue for the city. (The same problem applies to the other companies.)
  7. Land in North Bayshore has gotten so expensive that there is serious talk about putting a building on top of the VTA North County Maintenance Yard.
  8. Mountain View renters have seen Y/Y increases of about 20-25%. A 2 bedroom/2 bath apartment goes for $3300.
  9. The city planners have gotten so many building requests that the city has no staff to deal with the requests.
  10. The school districts have real problems keeping teachers – turnover for some schools runs 6-10%.
  11. Non-tech workers are being forced out – you know, people like teachers, waiters, and security guards. The MV building inspectors have found entire families living in a single room.
  12. The 3 new city council members are all pro-housing; the anti-housing candidates were rejected. So for comments along the lines of “stop whining and do something”: well, the new city council is doing something. Specifically, it is demanding that more housing gets built and less office space. We are trying to make room for the new people. In many cases these are our children.
  13. Google, to its credit, is starting to throw some money at the Mountain View Capital Improvement Project list to help out.
  14. Google and Mountain View are working out a traffic management plan to reduce solo drivers – the whole of North Bayshore is accessible by only 3 roads. Residents who live in North Bayshore right now cannot get to/from their houses between 8:30 and 10 and between 4:30 and 6.
  15. Google is trying to be innovative and more inclusive with the city as a whole.

It would be excellent if more companies really improved the cities and communities they are in. Too often, cities are treated as places to mine for tax breaks and income.

There is more Google HW conversation at KQED.


XML is not bad, just misused

There is the usual grumbling on Hacker News about XML.

The problem isn’t with XML; the problem is with the way it is used (or rather, misused).

What are XML’s strengths?

  1. Valid files are definable, i.e., this element must have these child elements, or this element must contain only digits.
  2. Textual: errors are fixable with text editing tools.
  3. Textual: means compressible with gzip (see the sketch after this list).
  4. all that “cruft” and “verboseness” makes XML files self-documenting. An XML element that looks like this:

    <first-name>Rebecca</first-name>

    clearly contains a first name. A developer reading the XML can figure out the meaning of the data in order to extract the information.

    Binary formats are definitely NOT self-documenting. Once the binary data layout description document is lost, the data is lost even if the file is still present. For binary formats, the “definition document” is often the code that created the file. Lose the program, or the hardware that can run the program, and the data is lost.

    JSON is NOT self-documenting either. Did XML need to be so heavy with its open/close tag syntax? Probably not, but gzip exists for a reason.

  5. when a standards body needs to define industry-standard interchange formats: for example, filing information with the Securities and Exchange Commission.
  6. data persistence across years in a way that will survive the programs that created it. (SEC filings, Government filings, Drug testing data, etc.) Or even a standard way to describe the Bible.
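
To make the compressibility point (strength #3) concrete, here is a minimal Node.js sketch; the document shape and record contents are made up for illustration:

// Gzip soaks up most of the overhead of XML's repetitive tags.
var zlib = require('zlib');

var record = '<person><first-name>Rebecca</first-name></person>';
var xml = '<people>' + new Array(1001).join(record) + '</people>'; // 1,000 records

console.log('raw bytes:    ', Buffer.byteLength(xml));
console.log('gzipped bytes:', zlib.gzipSync(xml).length);

The tags compress extremely well precisely because they repeat, so the self-documenting verbosity costs far less on the wire than it appears to.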

Losing data, for many applications, is not that big a deal. Does it really matter if Google can’t read old search history logs from 5 years ago? Probably not. However, it does matter if the SEC can’t pull up securities filings from 5 years ago, because that filing might be relevant to a criminal probe of a bank (hah-hah).

Ask NASA how much they wish they could have recovered lost satellite data from the 1970s, especially when trying to figure out the rate of climate change. Bitrot and technology obsolescence are problems that have been talked about for many years.

XML is really appropriate for the data persistence situation, where there is high value in the data being accessible years later.

XML is NOT a good choice for:

  1. API calls. No company should use XML-RPC. Tighter formats exist. XML is hard to parse on small devices. Just use HTTP semantics.
  2. Files whose loss is inconsequential.
  3. Data Serialization
  4. Transferring data between systems under the control of a single entity.
  5. Any situation that requires speedy parsing or generation of data.

There is a reason why XHTML didn’t last. XML and its constraints are not suitable for the transient world of web pages.

Like any other tool, XML’s purpose and limitations should be known and respected.

Don’t complain about XML being a poor solution for a problem it was not intended to solve.


California is awesome for business (The myth of the importance of low taxes)

Really, how important are business taxes? I talk about the myth of the low-tax utopia in this post.

The myth of the importance of low taxes

I decided to write this because of yet another debate on Hacker News about how California is somehow “losing” companies because of taxes:

You are thinking locally. Nobody cares where Tesla or Toyota build their cars. Having your business located “in the largest economy” isn’t a factor for a huge number of businesses.
Another way to put it is: You can build a massively successful business anywhere in the country. There is no particular magic in California in this regard. What is different here is that it cost more to simply exist in this state. And there’s far more to it than wages at play.

No one cares *where* their Tesla or Mazda is built. But they do care whether it is built well.

Tesla does care whether they can get qualified machinists to maintain the equipment that manufactures the Tesla. Sure, Tesla can move to a random low-tax location – and then spend years training the inexperienced, low-quality workers available at that location.

Is it possible? Maybe. For low-skill occupations, sure, it is possible.

But the costs of running a business are much more than taxes and regulations:

  1. supply chain and logistics costs (if suppliers are now further away, there is greater shipping and logistics management cost)
  2. infrastructure costs: does the selected location have an adequate power supply and transportation access?
  3. Support services: everything from cleaners to the people who maintain the physical plant
  4. Access to airports so executives, customers, and suppliers can get to the plant
  5. relocation costs: moving existing infrastructure is not cheap.
  6. customer issues: if the company in question is in the middle of a supply chain, is the company now moving away from its primary customers’ facilities? (This can make the sales process more expensive and give less visibility into customer needs.)
  7. Will your key people agree to relocate?
  8. Is your key talent pool for new hires geographically fixed? There is a reason why Silicon Valley is a successful place to be for software companies. There is a reason why New York/London is the place for finance. There is a reason why actors and actresses are in LA/New York/Bollywood: that is where the key talent is located.
  9. availability of labor: movies aren’t just about the actors and actresses. Film crews, set designers, lighting technicians, sound technicians, continuity experts, etc. This is true for many, many industries. Detroit (area) is still the home of U.S. automakers for a reason.

So yes, a business can make a decision based on taxes: Hollywood shoots movies in Vancouver, BC for a reason. But the main decisions and key talent are in LA and will always be in LA. Sure, Mercedes-Benz could put a plant in Alabama and compel suppliers to move some facilities to be close to them. But could an MB supplier make the move on their own? No. That MB supplier can’t decide to move to North Dakota; Mercedes insists that physical proximity is part of the contractual agreement for their business.

When it comes to high-value workers, California is the best:

  1. Non-compete agreements are unenforceable – I don’t have to worry that a job candidate will be blocked from joining my startup.
  2. Obamacare means that I can hire people from big companies without them having to worry about affordable healthcare.
  3. State government is pretty reasonable about things like Unemployment Insurance, etc. So I spend less time hassling with things.
  4. Lawyers and VCs have a reasonable attitude about certain things, which makes it easier to get shit done: less time dicking with screwy contracts. Part of this is that California law makes some of those things illegal.
  5. An educated population
  6. Proximity to existing similar employers – new employees don’t have to relocate

Low business tax utopia myth

Really, how important are business taxes?

Here is the TL;DR:

Businesses want the benefit of low taxes (lower immediate expenses) combined with the benefit of high-tax locations (government-provided services that the business would otherwise have to pay for).

But the low-tax utopia fails if no one pays the high taxes needed to provide the services.

Basic business realities:

  1. Business depends on customers.
  2. Unemployment results in fewer customers.
  3. Net costs must be lower than net revenue.
  4. Customers must perceive a value in purchasing goods/services from the business at the price charged by the business.
  5. It doesn’t matter how low-cost the workers or the taxes are if there are no customers. A business without customers is out of business.
  6. If a business can get the benefit of a high-tax/high-spending state without paying the taxes themselves, that is the best outcome for that business.

Low tax utopia myth

That last point is critical. This is why companies try to get sweetheart tax deals to relocate to a state, city, or region. The business gets a lower tax rate than their competitors just down the street. The business gets the benefits of a quality infrastructure and a quality workforce at a discount.

Where the low-tax utopia falls apart is when everyone gets their taxes lowered, or when taxes are applied regressively so as to directly affect the ability of customers to buy goods.

An experiment has been conducted that proves this point. In 2012, California raised its taxes; Kansas slashed its income taxes and other taxes.

In 2014, California is paying down its debt and is able to fund services at a higher level; unemployment is falling. Obamacare has rolled out, and health insurance premiums are dropping. California is planning for the future with a new water project, High Speed Rail, and education funding. California pays more in federal tax dollars than it receives. Governor Brown is cruising to an easy reelection, and as far as I can tell the Democrats are cruising in all statewide offices. I have not seen one TV ad nor received any mailer about any statewide office.

In 2014, Kansas’s state revenue is falling, schools are closing, the economy is going down, and unemployment is rising. Kansas debt is being downgraded by the rating agencies. Kansas is in survival mode. Gov. Sam Brownback is in serious trouble; Republicans are endorsing the Democratic challenger.

To summarize, in Kansas, low taxes resulted in:

  1. less employment
  2. lower services
  3. local businesses losing customers
  4. a lower quality of life
  5. a lower ability to invest in transportation infrastructure

This article from Forbes does a nice job of highlighting results of the Kansan experiment:

The tax cuts in Kansas have been breathtaking. In 2012, at Brownback’s urging, the legislature cut individual tax rates by 25 percent and repealed the tax on sole proprietorships and other “pass-through” businesses. It also increased the standard deduction (though it eliminated some individual credits as well).

In 2013, the legislature cut taxes again. It passed a measure to gradually lower rates even more over five years. By 2018, the top rate, which was 6.45 percent in 2012, will fall to 3.9 percent. It also partially restored some of the credits it eliminated in 2012. This time, it did raise some offsetting revenue for the first few years but far less than the statutory tax cuts. The Center on Budget & Policy Priorities wrote up a nice summary of all the tax changes.

So what happened after all those tax cuts? Revenues collapsed.

From June, 2013 to June, 2014, all Kansas tax revenue plunged by 11 percent. Individual income taxes fell from $2.9 billion to $2.2 billion and all income tax collections plummeted from $3.3 billion to $2.6 billion, a drop of more than 20 percent.

Besides, while Kansas individual income tax revenues bumped up a bit in 2013 over 2012 (as the fiscal cliff theory would suggest), the increase was only about $23 million. From 2013 to 2014, income tax revenue dropped by far more–by $713 million.

And that brings us to the bottom line. Since the first round of tax cuts, job growth in Kansas has lagged the U.S. economy. So have personal incomes. While more small businesses were formed, many of them were merely individuals taking advantage of the newly tax-free status of those firms by redefining themselves as businesses.

The business boom predicted by tax cut advocates has not happened, and it certainly has not come remotely close to offsetting the static revenue loss from the legislated tax cuts.

As Forbes says:

Kansas Governor Sam Brownback and his state legislature have embarked on a wonderful natural experiment. Once again we are testing the question: Can tax cuts pay for themselves? The answer– yet again– is a resounding no.


Yerka – a great innovation in making a bicycle unstealable!

What a great idea! For a thief to really profit from stealing this bike, they have to do extra work, and they have to be willing to do that extra work while stealing the bike. They can’t just cut a chain and go.

The students built the first working prototype bike, code-named the Yerka Project. The bike’s down tube at the bottom of the frame splits apart and wraps around an object such as a post or tree, and the bike’s seat post is then slotted through both ends of the down tube to complete the lock. Once you’ve got everything in place, you simply take out the lock pin on the end of the down tube to seal the lock. The whole process takes less than 20 seconds, letting you be on your way in almost no time.

Clearly, any lock can be broken at some point, but a lock in the seat post is very well protected. If the Yerka’s seat post is not rigidly anchored to the frame, the post just spins when someone tries to break the lock with leverage.


Do not let client code pick ids

I am having a discussion over on Meteor Hacks about how bad it is to let the client code pick ids.

The meteorhacks post shows how to make new items show up in a sample Meteor program before the server has completed its processing. The meteor folks call it latency compensation.

The problem is that latency compensation (in its current form) relies on the client code choosing the permanent id:

// Client-side event map (reconstructed; the template name is illustrative).
Template.postForm.events({
  "click button": function() {
    var title = $('#title').val();
    var content = $('#content').val();

    // The client, not the server, picks the permanent _id.
    var post = {
      title: title,
      content: content,
      _id: Meteor.uuid()
    };'addPost', post, function(err) {
      if (err) {
        // (error handling was truncated in the original snippet)
        console.log(err);
      }
    });

    // Navigate immediately, using the client-chosen id.
    Router.go('/post/' + post._id);
  }
});

Key points for non-Meteor developers to be aware of:

  1. Meteor is a Node.js framework. Server code is in JavaScript.
  2. Good Meteor code is written so it can run on both the client and the server.
  3. Meteor has “minimongo”, a client-side database that acts as a partial cache of the items in the server database.

Now if we rely on the client to select the database ids used by the server, there are a number of interesting attacks.

The original example inserted new Posts. If we wanted a more exciting example, a permission table (e.g., Users or Roles) could be the attack target.

Example #1: Insert race condition so the attacker can own the inserted post:

The attacker deliberately chooses an id known to exist. If the server code in question uses Posts.upsert() rather than Posts.insert(), the attacker can modify an existing post and take control of it.

Remember that in good Meteor code, the server and the client will share a lot of code; thus it is likely that an attacker could spot upsert() usages. But that isn’t really necessary.

The “counter” is: “but, but the original insert will then fail.” However, let’s use some real examples.

In the real world, the good client is trying to send the new post to the server over a flaky internet connection or to a busy server.

  1. The good client tries to send the new post but gets no response from the server.
  2. The good client tries again which succeeds on the server-side. However, the server response was lost.
  3. The good client tries a third time; the server reports that the post exists.
  4. Finally, the client tells the server to please add that post to its ‘hot list of posts’ that is always displayed.

Flaky internet / busy servers are a fact of life.

Now make a slight alteration. In step #2, instead of the good client sending the second attempt, it is really the attacker.

  1. The good client tries to send the new post but gets no response from the server.
  2. The attacker sends a post insert, which succeeds on the server side.
  3. The good client tries a second time; the server reports that the (attacker’s) post exists.
  4. Finally, the client tells the server to please add the attacker’s post to its ‘hot list of posts’ that is always displayed.

But how could the attacker know the id that the client used? (See below for that answer.)

Example #2: Modifying an existing record.

However, if the post creation code uses upsert instead of Posts.insert(), then a malicious client can happily write whatever it wants into the database. Because the post upsert code is shared between client and server, the malicious hacker can just scan the code looking for places where upsert is used.

“But, but why would anyone use upsert on Post creation?” Why was upsert created? Because it is useful for cases where the developer does not want to do a query + insert combination. Client id generation + upsert() = attacker fun.
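
Here is a minimal sketch of the vulnerable pattern; the method and collection names are hypothetical, my illustration rather than the meteorhacks code:

// Shared between client and server in idiomatic Meteor, so an attacker
// reading the client bundle can spot the upsert.
Posts = new Mongo.Collection('posts');

Meteor.methods({
  addPost: function(post) {
    // Trusts the client-chosen post._id. If that _id already exists,
    // upsert() modifies the existing document instead of failing.
    Posts.upsert(post._id, {
      $set: { title: post.title, content: post.content }
    });
  }
});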

This can be most fun in situations where the attacker can grant some privileges to the victim, so the victim’s call looks successful, but the attacker still controls the database Post entry.

Example #3: Flooding the DB (and why the attacker does not need to know the actual id generated).

This is a brute-force attack. But let’s say there was a weakness in the way the client chose the id: a predictable pattern. Remember, uuid() is concerned with uniqueness, NOT unpredictability.

If an attacker can predict what the set of possible uuids will be, the attacker can flood the server with bogus posts using likely ids predicted by analyzing the uuid() algorithm. Note that an attacker can confirm their ability to predict the ids by generating their own posts.

If the attacker targets a table that is likely to have a large number of new entries, the chance of the attacker getting a successful collision increases.
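
As a toy illustration of why predictability matters, here is a hypothetical id generator (NOT Meteor’s actual uuid() implementation) built on a linear congruential generator, which produces unique-looking but fully predictable ids:

// Unique-ish ids, but anyone who recovers the seed can predict them all.
function makeIdGenerator(seed) {
  var state = seed;
  return function() {
    state = (state * 1103515245 + 12345) % 2147483648;
    return 'post_' + state.toString(16);
  };
}

var clientIds = makeIdGenerator(42);   // the victim's generator
var attackerIds = makeIdGenerator(42); // the attacker recovered the seed

console.log(clientIds() === attackerIds()); // true: every id is predictable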

Fundamentally, the client should never be trusted with the ability to select anything that affects how things work.
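
A minimal sketch of the safer alternative, again with hypothetical names: the server generates the id, and the client navigates only in the method callback. (This trades away some latency compensation for safety.)

// Server-only code: the server decides the _id; the client never picks it.
Meteor.methods({
  addPost: function(fields) {
    check(fields, { title: String, content: String });
    // insert() with no _id makes the server generate the id, and it
    // fails (rather than overwriting) on any collision.
    return Posts.insert({ title: fields.title, content: fields.content });
  }
});

// Client code, inside the same event handler as before:'addPost', { title: title, content: content }, function(err, id) {
  if (!err) Router.go('/post/' + id);
});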


In praise of Javascript’s undefined

One of the unique[*] features of JavaScript is that it has two different ways to represent ‘no value’: null and undefined.

[*] “unique” in the sense that many popular languages do not have both null and undefined.

In my years and years of programming Java, I always missed having a way to represent the ‘no value YET’ condition.

null always ended up being needed to represent the ‘no value’ condition, which made it difficult to handle the ‘no value YET’ condition.

‘no value’ is a definitive terminal condition: there is no value available. Period.

‘no value YET’ means the information is still being retrieved/calculated, a non-terminal condition. The requestor should ask again at some future date.

The ‘no value YET’ condition occurs frequently in internet coding. For example, server A initiates a request to another server, server B, for some information (V). V could take seconds to arrive or may never arrive.

The code on server A wishes to continue running while waiting for server B to respond with the value of V. What value of V should be used by the server A code that is asking for V’s value? In an idealized world, all the calling code uses concepts like Promises and Futures, and the problem is kicked upstairs.

In the real world, this often doesn’t work in practice:

  1. This change almost certainly requires changing the contract about the object/value being returned. Every caller would then need to be changed. Often this change causes ripple effects that extend through massive amounts of code.
  2. External libraries are not designed to deal with the complexities of getting V; they just want V.
  3. Maybe V is part of a larger calculation where some value of V must be used. The caller supplies defaultV if there is no V. But the calculation involving V should be skipped if no V is available.
  4. Callers have their own timing constraints and may not be able to wait for V to arrive.
  5. The caller is happy to use a cached value of V even if the cached value has expired.
  6. Exposing the details of V being ‘on its way’ is too much information for the caller. The caller doesn’t have any alternative path available.
  7. Returning a value of some type other than the return type expected by the caller breaks the function’s contract with all the existing calling code.
  8. Lastly, leaking the implementation details of getting V means that if those details change, all the callers are impacted.

This is where the beauty of null and undefined comes into play.

For those callers that just need V today, null and undefined can easily be treated identically:

var V = undefined;
console.log( V == null );
console.log( null == null );

both return true.

so ….

if ( V == null ) V = defaultV;

This is the same code the callers previously had. Thus, the vast majority of existing callers continue unchanged.

For the minority of callers wanting to know if they should come back for the final answer:

V === undefined

will give them the answer they need.
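
As a concrete, hypothetical illustration, here is a tiny cache where undefined means ‘no value YET, ask again’ and null means ‘no value, period’:

var cache = {};           // values arrive here asynchronously

function lookup(key) {
  return cache[key];      // undefined until something is stored
}

cache['price'] = null;    // the server answered: there is no price

var v = lookup('price');
if (v === undefined) {
  console.log('no value YET: come back later');
} else if (v === null) {
  console.log('no value, period: use the default');
} else {
  console.log('value:', v);
}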
