INTP

Blog

Investing in Software

In today’s technology- and data-driven landscape, some of the largest business expenditures are allocated towards software. This may include investment in the form of tech startups developing the platform that will generate revenue, tech-enabled companies seeking enterprise software or automation that will cut long-term costs, companies seeking to make sense of raw market data, control logic for embedded systems, and virtually every other kind of software-based venture. Upon reading this article, the reader should gain a basic understanding of project selection using metrics such as Internal Rate of Return (IRR), hurdle rates, and opportunity cost; requirement identification using methods such as creating personas; requirement communication using engineering documents such as Unified Modeling Language (UML) diagrams for the back end and wireframes for the front end; and finally, verifying and shaping software investment via Quality Assurance (QA) processes.

What difference does it make how much you have? What you do not have amounts to much more. - Seneca the Younger

A universal challenge that every business entity experiences is scarcity. Competency in leveraging the limited resources available often determines the fate of an entity. Thus arises the necessity of accurately selecting which projects to allocate resources towards, identifying the requirements for those selected projects, fully communicating the requirements to the engineering team, and assuring the requirements have been met correctly upon completion. Only with the understanding and execution of this process may one invest in software with minimal risk and receive a software platform that conforms closely to the original vision.

Establishing Business Strategy and Investment Philosophy

It is recommended that the first step an entity takes in selecting its investments is conducting business analysis that is reflected in a business plan or strategy, along with acquiring a set of goals and philosophies that guide future behavior such as investment selection. The business plan is a living, breathing document that outlines the organization’s goals, philosophy, competitive approaches, market information, and financial information. The United States Small Business Administration has great content regarding business plan creation on its website.

Among the most important sections of this plan is that which is concerned with market analysis. Information about the competitors, customers, and the market as a whole provides a great background for the development of a goal and strategy. The creation and ongoing maintenance of a competitive matrix within the business plan is, thus, a vital tool in determining whether the entity will compete exclusively on price or on differentiation. Further, the activities of competing businesses and their corresponding results may be observed in order to lessen the learning curve on the way to market dominance.

'A competitive matrix is a tool that lets a company know where it stands relative to the competition with a quick glance.'

On the demand side of the market, target demographic analysis should be conducted to determine key consumers, their preferences, and their real income. This is where it is suggested to begin creating personas, or fictional representations of the target demographic groups, including names, ages, interests, real income, and so forth. Because acknowledgment of the nature of the end user is vital to the firm’s success, these personas will be referred to at all stages of the investment process. Surveys completed by the target customers should help the entity determine its services’ indifference price points (IPPs) as well as some preliminary requirements and use cases.

In order to optimize the return on investment, a value-based pricing approach centered around the Price Sensitivity Meter (PSM), which incorporates the IPP, should be used. Some questions that need to be asked of the target customers to determine this figure are:

  1. What is the maximum price you would consider paying for this service?

  2. What is the minimum price you would pay without having concerns as to the quality of the service?

  3. What is a price you would pay for this service, but give significant consideration as to whether it is too high?

  4. What is a price you would pay for this service that makes you feel as if you received a bargain?

After surveying a sufficiently large sample of the target demographic, a plot of each individual’s “expensive” curve and “cheap” curve may be created, where the expensive curve is formed by drawing a line between the prices given in answers to questions 1 and 3 above, and the cheap curve by drawing a line between the prices from questions 2 and 4. Where these two curves intersect is the individual’s IPP. The mean of all the individuals’ IPPs may then be calculated to determine the market IPP, or the target price for the service.
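In practice, the curves are usually built from the whole sample at once: the standard Van Westendorp PSM plots, for each candidate price, the fraction of respondents who already consider it expensive (question 3) against the fraction who still consider it a bargain (question 4), and reads the IPP off the crossing point. The following is a minimal sketch of that aggregate approach; the survey figures are hypothetical.

```python
# Sketch of an aggregate Price Sensitivity Meter (PSM) computation.
# Each respondent supplies an "expensive" price (question 3) and a
# "bargain" price (question 4); all numbers below are hypothetical.
respondents = [
    {"expensive": 30, "bargain": 25},
    {"expensive": 45, "bargain": 35},
    {"expensive": 38, "bargain": 30},
    {"expensive": 50, "bargain": 40},
    {"expensive": 35, "bargain": 28},
]

def pct_expensive(price, answers):
    """Fraction of respondents who already find `price` expensive."""
    return sum(a["expensive"] <= price for a in answers) / len(answers)

def pct_cheap(price, answers):
    """Fraction of respondents who still consider `price` a bargain."""
    return sum(a["bargain"] >= price for a in answers) / len(answers)

def indifference_price_point(answers, lo=0.0, hi=100.0, step=0.5):
    """Scan a price grid and return the price where the two curves cross."""
    best_price, best_gap = lo, float("inf")
    price = lo
    while price <= hi:
        gap = abs(pct_expensive(price, answers) - pct_cheap(price, answers))
        if gap < best_gap:
            best_price, best_gap = price, gap
        price += step
    return best_price

ipp = indifference_price_point(respondents)  # 35.0 for this sample
```

The market IPP is then the target price; in a real study, a finer grid and a much larger sample would be used.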

'Determining the indifference price point is crucial in value-based pricing and allows revenue optimization in relation to the pricing of a good or service.'

For the sake of completeness, the other two primary methods of price determination will be mentioned: cost-plus and competitor proxy. In cost-plus pricing, a predetermined profit margin is set above operating cost. This is generally considered the least effective method of pricing. The competitor proxy method simply reflects the pricing used by the competitors. Of course, there are several other methods for pricing; however, the three listed are the ones used most often.

From the supply and demand side market analysis, financial forecasts as a function of sales may be created. The operating cost, revenue, and profit forecasts are all used in determining a firm’s method of competition. For example, a firm with high fixed costs, such as an airline company, may wish to compete on price in order to cover these costs as quickly as possible.

Another challenge that a firm with high fixed costs may experience is that related to its economies of scale, which directly impact its returns on investments. There are two approaches to this dilemma: the firm may either invest a large amount of upfront capital to optimize production while perhaps operating at a fraction of capacity until sales catch up, or the firm may scale according to its capacity, in which case it will not realize its full potential profits. Both of these approaches are valid given certain contexts and should be considered seriously when crafting an investment strategy.

With the competitive and financial landscape in mind, an executive summary should be crafted that reflects the goals and strategies of competition for the entity. From this, all the possible projects that will advance the entity toward its goals, given its current state, may be enumerated. At this stage, it should be simple to identify which projects are necessary to the self-preservation of the entity, how the projects rank according to the entity’s competitive strategy, how the projects will shape internal operations, and which projects induce revenue versus those that cut cost, eliminate risk, or provide a differentiating feature.

Selecting Investments

Each business entity in existence faces a wall of constraints in the form of financial capital, temporal capital, and labor, among all the other factors of production. Further, most firms, unless they have achieved the ultimate goal of capitalism to become a monopoly or a protected member of an oligopoly, are waging bloody warfare against their competitors for market share in what is referred to as a “perfectly competitive market.” Thus, a firm’s route to self-preservation and growth needs to reflect its competitive landscape given its internal constraints.

After a thorough qualitative analysis of the business, its goals, and its competitive approach has been executed, a quantitative analysis is able to determine the next actionable steps a company should take. This article will focus on technology-based projects; however, the process is the same for all projects and is standard in corporate finance. This section helps explain how to identify the next software investment to be made, whether it is an out-of-the-box solution, a fully custom solution, automation software, or anything else software-based.

Net Present Value (NPV) may be thought of as the summation of all the cash flows discounted to today’s value using a discount rate, which is typically determined from the Weighted Average Cost of Capital (WACC) incurred by the firm. One cannot simply sum the individual cash flows because the Time Value of Money (TVM) dictates that a dollar in the hand today is worth more than a dollar that will be received one year from now, due to the opportunity cost of possible investments during that time period. The NPV for all the potential investments must be calculated, where, given the discount rate, a positive NPV indicates the project will be profitable and a negative NPV warns against undertaking the project in the first place. The NPV of a project may be calculated using the formula below.

'The Net Present Value formula aggregates all discounted future cash flows and yields what the project value is presently.'

It is, however, much simpler to use a spreadsheet tool such as Microsoft Excel or LibreOffice Calc to determine this figure.

'The npv() function is used in LibreOffice to determine the net present value of the sample project given the initial investment and the projected cash flows.'
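The same calculation can also be sketched in a few lines of code. Note that spreadsheet NPV() functions discount every argument by at least one period, whereas this version treats the first cash flow as the initial investment at time zero; the project figures below are hypothetical.

```python
def npv(rate, cashflows):
    """Net Present Value of a series of cash flows.

    cashflows[0] is the initial investment at t = 0 (typically negative);
    each subsequent flow is discounted one additional period.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical project: $1,000 up front, then $500 per year for three years,
# discounted at a 10% rate (e.g., the firm's WACC).
project_npv = npv(0.10, [-1000, 500, 500, 500])  # ≈ 243.43, so profitable
```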

The Internal Rate of Return (IRR) of a project is defined as the discount rate at which the Net Present Value of all its cash flows equals zero. A higher IRR tends to indicate a higher Return on Investment (ROI). However, it is important to keep in mind that IRR is an interest rate, and that the actual magnitude of the returns needs to be kept in mind as well. Once a series of cash flows, starting with the initial investment, has been forecast, determining the IRR is simple with a spreadsheet tool and the irr() function, similar to how NPV was calculated in the example above.
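For readers who prefer code to spreadsheets, the IRR can be recovered numerically as the rate at which the NPV crosses zero. The sketch below uses simple bisection and assumes a conventional project (one outflow followed by inflows), so a unique IRR exists in the bracket; the cash flows are hypothetical.

```python
def npv(rate, cashflows):
    """NPV with cashflows[0] at t = 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisect for the rate where NPV = 0; assumes one sign change in [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        # For a conventional project (outflow first, inflows after),
        # NPV decreases as the discount rate rises.
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

rate = irr([-1000, 500, 500, 500])  # roughly 23.4%
```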

Note that the issuance of debt by the company will affect the cash flows. A levered company will experience cash flows of lesser magnitude than those of its hypothetical unlevered clone. As the debt-to-equity ratio increases, the beta, or volatility, of the company increases as well, signifying that variance in equity distributions is at the mercy of repayment to creditors. The beta will be used in determining another key metric, the hurdle rate, that will be used in the investment selection model. Further, note that the cash flows used by these models are nominal, not real, meaning they are not adjusted for outside factors such as inflation.

The hurdle rate is the “hurdle” rate of return that a project must leap over in order to be considered for selection. There are several valid methods for determining this figure. The most common involves using the capital asset pricing model (CAPM) to determine a risk-adjusted hurdle rate.

'The capital asset pricing model is often used to determine hurdle rate.'

As the formula above demonstrates, a greater beta indicates higher risk and will yield a higher hurdle rate. Some companies opt out of risk-adjusted hurdle rate determination altogether and instead use other figures, such as their WACC or even arbitrary values.
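The CAPM computation itself amounts to one line: the risk-free rate plus beta times the market risk premium. A minimal sketch, with all input figures hypothetical:

```python
def capm_hurdle_rate(risk_free, beta, market_return):
    """CAPM expected return: r = rf + beta * (rm - rf).

    risk_free     -- e.g., the current Treasury yield
    beta          -- the project's (levered) volatility relative to the market
    market_return -- the expected return of the market as a whole
    """
    return risk_free + beta * (market_return - risk_free)

# A risky project (beta = 1.5) in a market returning 8%, with a 3% risk-free
# rate, must clear a 10.5% hurdle.
rate = capm_hurdle_rate(0.03, 1.5, 0.08)  # 0.105
```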

Projects with IRRs lower than the hurdle rate may now be eliminated from consideration. There is one caveat, though, and it comes in the form of opportunity cost. For example, the development of the core revenue-generating platform carries with it great risk, which, if quantified, might be reflected in a hurdle rate significantly higher than any currently quantifiable IRRs. However, not undertaking this project would mean the firm choosing not to pursue the enterprise entirely. That is a massive opportunity cost that the firm, if it wishes to compete in the industry, cannot incur. Further, companies with large economies of scale wishing to starve competitors out of business may offer their services below their marginal cost of production, incurring a temporary loss for a long-term strategy.

Thus, it is apparent that IRR, NPV, hurdle rate, and opportunity cost are merely tools to be used in a broader, more comprehensive analysis. This is typically where methods such as minimax decision tree analysis and real options valuation come into play in order to make the most effective decision possible, given current market conditions and risk. In the end, portfolio and project selection are an art, best practiced after hands-on experience has led to an innate feel for the mechanics of a particular industry.

Identifying Project Requirements

Once resources have been allocated to a project, identifying the software requirements comes next. This step entails the creation of a software profile, the preliminary documentation detailing the behavioral and structural analysis of the system, which will be used to build the engineering documents in the next phase. Determining the answers to the general questions below helps guide the investment team in the right direction.

  1. For what problem(s) is this software a solution? Some potential problems may be that the business has not yet developed the platform central to its revenue generation model, that the firm needs to make sense of its raw data to determine market trends, or that automation and efficiency need to be increased to decrease operating costs.

  2. Will this software be for enterprise, consumer, or business use?

  3. Who are the end users of the system? Enumerate every type of end-user and their corresponding persona.

  4. What will be each of the end users’ actions? Enumerate every possible workflow they may face. For example, one administrator workflow may be to add new employees to the system.

  5. Is the software to interface with other technical systems? If so, which ones are they? For example, does the software need to interface with third party software or a previously developed company database?

  6. Are there any liabilities, such as PCI compliance or private user information, that must be addressed in the design?

  7. Is the system required to warehouse and/or analyze big data, meaning data that cannot be computed on a single node? (If you can open the data set in Excel without depleting your RAM and virtual memory, it is not big data.) If so, will this big data be analyzed retrospectively or live via a stream? What type of analysis needs to be performed on the data? What is the process for actually warehousing the data once it has been used?

  8. Across what systems will the software be deployed? Enumerate every platform, such as web server, mobile phone, VPN-connected office desktop, and so forth, over which this software will run. How will the software be deployed? For example, will it be hosted on a web server for mobile phone apps to query?

The next and final portion of the requirement gathering phase is to differentiate core functionality from features. By separating the functionality requirements into these two categories, a better deployment plan may be formulated. Many times, a product encapsulating the core functionality will be pushed out as soon as possible, whether to production or to the beta testing process, minimizing capital expenditure and temporal opportunity cost. This also allows the company to refine the core functionality offering sooner rather than later, before finalizing the features. These features are typically added afterwards and yield diminishing marginal returns to the end product.

Communicating Project Requirements

If you are a stakeholder in a technology-enabled company, then it is vital for you to be able to concretely grasp the layout of your supporting technologies. Just as a stakeholder in the construction of a skyscraper would need to be familiar with building plans and blueprints, a software stakeholder needs similar engineering and architectural documentation for accurate assessment and decision making. Along these lines, software investments must be described by both the requirements of the functionality, or back end, of the system, and those of the user experience (UX), or front end.

Describing the Back End

The back end of the platform is communicated to the technical team via the Unified Modeling Language (UML). UML defines 14 different types of diagrams that describe the behavioral and structural components of a system. Not all of them need to be used to fully and accurately describe the software, and only the higher-level diagrams need to be created by the business team before handing them off to the technical team, who may complete the rest.

The first question that needs to be addressed is: who will use the software? The list of user types generated in the previous phase may now be reflected onto a diagram in the form of stick figures, ideally with their personas labeled adjacent to them. The next question to be tackled is: for what will they use the software? Scenario bubbles describing the users’ actions on the system may now be bound to the stick figures with connecting lines. This is the simplest, yet one of the most important, UML diagrams and is referred to as the use case diagram.

'In a Use Case Diagram, a stick figure represents a user and the connected scenario bubbles represent intents of that user.  Image taken from www.tutorialspoint.com'

It is a good idea to have only broad actions represented by the scenario bubbles connected to the user. If further sub-actions comprising the broad action are required to eliminate design ambiguity, then they may be attached to the generic scenario bubble. The broad actions are typically referred to as sea-level actions and the individual processes that comprise these are referred to as fish-level actions. The bottom line is that enough scenario bubbles should be included to provide an unambiguous description of the actions of each user to each of the stakeholders and developers, but they should not exceed the quantity where they clutter the diagram by describing obvious, menial tasks.

The UML diagram above handled the question of who will use the software, so the next question is how will they use the software? This is where another UML behavioral diagram called the activity diagram comes into play.

'An activity diagram lays out the workflows of the users on the software.  Image taken from www.tutorialspoint.com'

Observe that the activity diagram merely lays out all of the workflows described in the previous phase in a graphical manner that is easy to implement in the source code.

The next engineering chart that the business and investment team should create is the deployment diagram. This describes how the software is to be hosted and run. For example, a mobile app diagram would include a representation of the mobile device, the web server it interacts with, and the database server from which the web server submits queries.

'The deployment diagram lets stakeholders know how their software will be hosted.  Image taken from www.tutorialspoint.com'

Technical business and investment teams also need to be familiar with the basic concepts of Object-Oriented Programming (OOP). There are several other programming paradigms, like functional programming and imperative programming; however, the business world has adopted OOP for most purposes for several specific reasons. At the core of OOP is a model of viewing the world in terms of objects, each containing attributes and methods. Take a car object, for example; it has certain attributes, such as number of doors, color, and transmission type, and methods, such as accelerate, brake, turn left, turn right, and toggle the air conditioning. A well-organized software system will be a thriving ecosystem of different objects, each identifying itself via its attributes and interacting with one another and the world via its methods.

There are other principles of OOP, such as inheritance, encapsulation, and polymorphism, but these are not as critical for the investment team to know, although knowledge of these subjects helps one better understand and monitor the health of the system. Because of its intuitive universal modeling perspective, OOP allows for an intuitive approach to developing the software and an easier time recruiting developers, as it is at the heart of most computer science curricula. Further, it helps to decrease technical debt, the temporal and capital expenditure that will be paid at some point in the future for maintenance and upgrades, by encouraging the development of organized and self-documenting code.
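The car example above translates directly into a class; the following is a hypothetical illustration, not production code:

```python
class Car:
    """A car object: attributes describe it, methods let it act on the world."""

    def __init__(self, doors, color, transmission):
        # Attributes from the example: number of doors, color, transmission.
        self.doors = doors
        self.color = color
        self.transmission = transmission
        self.speed = 0
        self.ac_on = False

    def accelerate(self, amount):
        """Increase the car's speed."""
        self.speed += amount

    def brake(self, amount):
        """Decrease the car's speed, never below zero."""
        self.speed = max(0, self.speed - amount)

    def toggle_air_conditioning(self):
        """Flip the air conditioning on or off."""
        self.ac_on = not self.ac_on

# Instantiate an object from the class and exercise its methods.
sedan = Car(doors=4, color="blue", transmission="automatic")
sedan.accelerate(30)
sedan.brake(10)  # sedan.speed is now 20
```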

'At INTP, we have a joke that the rise of OOP has eliminated all actual work from the Human Resources departments in software-based companies.  With the modern, business-friendly languages such as C#, Java, and Swift, one may go to the zoo, slap a suit and tie on a gorilla, dub them a developer, and still get code that compiles.  Most likely, the code won’t be pretty, but it will do the job for the time being.  Of course, HR departments perform some of the most important work that determines the health of an organization so we do not wish to anger our friends in these roles.'

From knowledge of OOP, the business team may then construct what is referred to as a class diagram. All objects are instantiated from classes, blueprints that define the nature of the objects themselves. A class diagram is simply a graphical representation of how the classes interact with one another and what their individual attributes and methods are. An investor or businessperson without software engineering experience will not be able to complete these class diagrams fully; however, it is important that they understand the landscape of the software where investment is allocated, as well as communicate their vision of the intrinsic nature of the objects in the system and their interactions with one another.

'The class diagram is an overview of the ecosystem created by the objects in the software.  It is one of the most important documents for the business team to understand how their software actually functions.  Image taken from www.tutorialspoint.com'

With the creation of the four diagrams mentioned above, the business team is able to better understand exactly what capital is being allocated for, communicate their vision to the engineering team clearly and succinctly, and incur savings by keeping the general structure and design of their software in-house, as it should be in the first place.

Some great tools that INTP uses frequently for the purposes of generating UML diagrams are PlantUML and Dia.

Describing the Front End

The front end tends to be a bit more intuitive to describe for most people. Via graphical mockups and wireframes, a business unit is able to fully communicate how they envision the front end should look, given their target demographics and corresponding personas.

A mockup is simply a visual depiction of all the screens that will interface with the end user. Many times, these may be created in Microsoft Paint, GIMP, or more specialized tools. Wireframes indicate how these screens yield to one another. For example, part of a platform’s wireframe may entail a mockup of a login screen with an arrow to a mockup of a home screen.

This is a critical stage of requirement communication, as the UX imposed onto the end user is vital for smooth adoption of the system, especially in publicly-facing software; however, its importance is often overlooked by those new to the industry. How easily the user is able to complete the tasks required of them on the system, without expending too much effort or time in figuring out how to accomplish their goals, is often the determining factor in the success of software; an interface that achieves this is called a Natural User Interface (NUI). The objective of the investment team is to ensure the UX may be classified as a NUI, given their target demographics, to the point that the end user exerts little to no effort in navigating around the platform, almost as if in a hypnotic trance. One interesting related marketing trend states that advertisements that appeal solely to the primitive reptilian cortex, dealing primarily with immediate base desires of the individual, convert better than those that appeal to higher levels of the brain devoted to logic and thought, such as the neocortex. The bottom line is that greater user adoption and retention will occur when the end user is provided an environment where they do not have to use their brain!

'Microsoft modeled their GUIs in such a way to best appeal to what they referred to as the “dumb user.”  Of course, several people will state that they merely copied Apple and the genius of Steve Jobs in realizing that adherence to end user psychology yields greatest conversions.  Even fewer will mention how Apple copied from the X window system developed at MIT…  There really is nothing new under the Sun.  This is an image of the X window system that is responsible for making computers accessible to the average user.'

Monitoring Development of Investment

Once the requirements have been handed off to the engineering team, it is important to enforce the proper project management strategy among the team so that development time is minimized and discrepancies between vision and platform are detected early and easily settled. The most prevalent software project management paradigm is known as Agile.

The goal of Agile is to decrease the incremental development loop cycle so that investors and project stakeholders may be provided greater transparency throughout the entire development of the platform. This means that there is, at all times, a working version of the platform that may be demonstrated to the stakeholders. At the heart of many Agile operations is what is referred to as the scrum. The scrum is a quick daily meeting of all the developers on what they were able to accomplish the previous day and what will be completed the current day. It provides the entire team an opportunity to gain better perspective on the technical inner-workings of the system as a whole. If possible, it may be beneficial to the business team to be present for these brief meetings to get a better feel of the state of their investment.

A more traditional software project management paradigm is waterfall. In this system, development is strictly sequential, meaning it takes longer, incurring greater investment capital, and iterations of the platform take much longer to achieve, decreasing transparency to stakeholders as to the current state of the system. Thus, it is advised to always impose an Agile project management paradigm in order to maximize the return on investment.

Performing Quality Assurance (QA) on the Investment

Once the engineering team has executed the requirements relayed to them, it is imperative to verify the integrity of the platform via the QA process. A rigorous alpha testing process, or internal testing, should be followed so that the software, once released to beta or production, offers minimal to no bugs to the end user. Ideally, the engineering team conducts a QA session prior to handing off the deliverable to the business team.

The QA process is tedious and its goal is to find as many problems with the software as possible. It typically entails going through each of the workflows enumerated in the requirement identification phase several times to ensure that they are intuitive to the end user and consistently free of bugs.

Many times, it is advised that a beta testing phase ensue, in which a select few clients from the target demographic groups are given access to the platform before its hard launch into production. Working closely with the beta testers allows the business team to determine the actual ease of use of the software, as well as to receive feedback that may prompt further iterations of the software prior to release.

The final assurance of quality for the software is to release it into production and test the initial revenue-generating hypotheses. In the end, there is no greater judge for the merit of an investment than the marketplace itself.

'By following the software investment approach laid out in this article, the investor gains maximum control over their technology portfolio.'

Summary

Successful investing in software, technology startups, and technology-enabled projects entails knowledge of several principles. As in most investments, understanding basic finance principles, the nature of the industry, and the competitive approaches of a company come first. Next, the investors and stakeholders need to comprehend the more technical aspects of software engineering and project management and how they factor into the bottom line. This is especially true in startups and small businesses, where the allocation of limited free capital determines the fate of the entity. Although it is not as important when investing in large corporations, it does provide further insight as to how portfolio selection occurs, considering all of the possible projects that may be developed, how risk is factored in, and to what the capital is actually devoted.

By following the process of understanding the underlying financial principles, selecting the proper allocation of investment in the form of temporal and financial capital, identifying investment requirements, relaying these requirements to the engineering team, overseeing the development of the investment, assuring the quality of the investment, and finally, releasing it into the marketplace, software investors take maximum control over their investments.

Reducing Your Business Costs Via Combinatorics and Traveling Salespeople

Upon reading this article, the reader should understand the basics of combinatorics and how they are useful to increasing business profit. Further, the reader will be familiarized with the classic combinatorial optimization example of the Traveling Salesperson Problem (TSP), used heavily in artificial intelligence, industrial engineering, and computer science.

We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far. - H.P. Lovecraft

Real World Example

In early 2016, a logistics company approached INTP to provide consulting services aimed at helping them decrease their costs, in turn increasing profit. The nature of their business was to determine the routes that a package or inventory would take from the point it left the possession of the entity sending it to its final destination. As the client identified all of the variables involved in this process, it became apparent very quickly that this would entail no trivial solution.

At the head of this process, the inventory could either be picked up at a specified location via courier or freight truck in several major cities or handed directly into a depository at even more numerous available locations. From here, the inventory would sit in the warehouse, accruing storage costs, until the transport phase was to be initiated.

The transport phase entailed moving the inventory to the final depository, many times iterating through intermediary storage facilities, upon which a team of couriers would deliver the inventory to the final destination. Domestic transport occurred primarily through freight trucks, USPS (which, apparently, even FedEx and UPS use somewhat often as the backbone of their transportation models), and airplanes, while cross-continental transport primarily entailed the use of cargo ships and airplanes.

Let us enumerate the complexity of the problem stated above:

  1. A request for delivery would enter the system. The inventory would be picked up by a courier if small enough, and by freight truck if it were too large. Either way, human capital was limited in number, and an effective route always needed to be determined to limit travel distance, time, and costs.

  2. The inventory would sit in a warehouse, where it might be batched with other inventory, and sent to another depository, which often would not be the final storage facility. The costs associated with storage, direct labor, and opportunity cost in not transporting ideal batch sizes needed to be minimized.

  3. The actual transport mechanism was selected from various options, each with varying capital costs, time costs, and constraints involving the nature of the package itself. It would need to be determined whether the inventory gets sent directly to the final depository, if that were even possible; which depository, if so; or whether it accrues holding costs in a warehouse to be batched with other inventory going to the same destination.

  4. The final phase entailed a reverse order of Step 1 above. A team of couriers or freight truck drivers would deliver the package to the final destination, requiring an optimal route and batch selection in order to minimize costs.

  5. The time frame for delivery for each of the transport orders needed to be prompt, some more prompt than others.

How would one go about setting an organizational policy to minimize costs at all times? Of course, we suggested minimizing costs with some instantaneous solutions, such as digitizing the forms required to be filled out at each transport node once liability of the inventory had been ceded to the next in line, and automating this process altogether via a system of RFID tags and scanners. The latter option, however, to our knowledge, was not implemented.

Still, the bulk of the cost reduction lay in setting a transport policy that continuously delivered the lowest-cost route for each package while adhering to its temporal constraints. When dealing with complex problems, one of the first steps toward a solution is to translate the problem into a simpler one that is easier to wrap the mind around. This is where the TSP came to mind.

Traveling Salesperson Problem

Suppose a salesperson was required to visit every major city in the US to peddle their product. They would, of course, wish to minimize the distance they travel, saving on gas, car maintenance, and time. This problem may easily be modeled with an undirected weighted graph where the nodes represent cities and the edges represent distance.

'The TSP may be modeled as an undirected weighted graph.'

At first glance, this appears to be a trivial problem given the power we associate with computers: simply find each permutation of routes traveled and select the one which yields the lowest distance. I remember attempting this challenge for an assignment in an algorithms course I took in college. As soon as I ran the program, my RAM and virtual memory filled up entirely, pounding in the notion that such a brute force approach would not work on a standard computing node. I tried running the program a few times and checking my source code before I realized the futility of the situation. Even brute forcing on the supercomputers my school owned would have been impossible. For smaller problem sets, however, finding the shortest route was quick and easy; from what I recall, the largest number of cities I could process via my source code on my laptop was 19. With each additional city, the number of permutations exhibited factorial growth, placing an extreme burden on the processor and memory.
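The brute force approach described above can be sketched in a few lines. The distance matrix below is illustrative, not real routing data; note that the search examines (n-1)! tours, which is exactly why it collapses past a couple dozen cities.

```python
# Brute-force TSP: enumerate every route permutation and keep the shortest.
from itertools import permutations

# Symmetric distance matrix for 4 cities (an undirected weighted graph);
# DIST[a][b] is the distance between city a and city b.
DIST = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def route_length(route):
    """Total distance of a closed tour that returns to the starting city."""
    return sum(DIST[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def brute_force_tsp(n):
    """Check all (n-1)! tours starting at city 0; infeasible for large n."""
    best_route, best_len = None, float("inf")
    for perm in permutations(range(1, n)):
        route = (0,) + perm
        length = route_length(route)
        if length < best_len:
            best_route, best_len = route, length
    return best_route, best_len

route, length = brute_force_tsp(4)
print(route, length)  # (0, 1, 3, 2) 80
```

With 4 cities there are only 6 tours to check; at 20 cities there are already over 10^17, which is the factorial wall described above.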

A “good enough” solution to this issue uses one of several cost-minimizing heuristics. A heuristic is a practical means of addressing a problem: it may not find the absolute shortest path (cost), but it can get extremely close, all while running on cheap commodity hardware. The alternative brute force method is extremely expensive for large enough problem sets, and outright impossible for many business requirements with current hardware capabilities.

For example, there exists a heuristic called the nearest neighbor. It is a greedy algorithm, meaning it is myopic in its iterative decisions on which node to visit next: it always suggests the closest not-yet-visited city to the salesperson. This algorithm often fails to yield a solution within 20% of the optimal value; however, it is a good introduction to combinatorial heuristics.
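A minimal sketch of the nearest neighbor heuristic, using the same kind of illustrative distance matrix as before. It runs in O(n²) time rather than factorial time, which is the entire appeal; the trade-off is that the greedy choices can leave long final legs.

```python
# Nearest-neighbor heuristic: from the current city, always travel to the
# closest unvisited city. Fast, but greedy, so not guaranteed near-optimal.
DIST = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def nearest_neighbor_tsp(dist, start=0):
    n = len(dist)
    unvisited = set(range(n)) - {start}
    route, total, current = [start], 0, start
    while unvisited:
        # Greedy step: pick the closest unvisited city.
        nxt = min(unvisited, key=lambda c: dist[current][c])
        total += dist[current][nxt]
        route.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    total += dist[current][start]  # return home to close the tour
    return route, total

print(nearest_neighbor_tsp(DIST))  # ([0, 1, 3, 2], 80)
```

On this tiny instance the greedy route happens to match the optimum; on larger, less symmetric instances it frequently does not.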

Another heuristic, the ant colony algorithm, mimics the behavior of ants to arrive at a solution. This is an artificial intelligence approach, in particular, swarm intelligence, where a set of agents with relatively low processing power and memory work together as a team to accomplish amazing feats. Whenever ants in search of food come across some obstacle, say a rock, some will travel to the left and some will travel to the right. During this process, they lay down a pheromone trail which evaporates at a set rate. The shorter path accumulates higher concentrations of pheromone, since ants complete it in less time and therefore traverse it more often, and subsequent ants preferentially follow the stronger trail. Unlike the nearest neighbor algorithm, the ant colony algorithm yields solutions that are very near the optimal route.
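The pheromone mechanism can be sketched compactly. The parameters below (ant count, evaporation rate, alpha, beta) are illustrative defaults, not tuned values; each ant builds a tour probabilistically, favoring edges that are short and pheromone-rich, and pheromone is evaporated and then redeposited in inverse proportion to tour length.

```python
# A minimal ant-colony sketch for the TSP (illustrative parameters, not tuned).
import random

DIST = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]

def tour_length(dist, route):
    return sum(dist[route[i]][route[(i + 1) % len(route)]]
               for i in range(len(route)))

def ant_colony_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0,
                   evaporation=0.5, seed=42):
    rng = random.Random(seed)
    n = len(dist)
    pheromone = [[1.0] * n for _ in range(n)]
    best_route, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            route, unvisited = [0], set(range(1, n))
            while unvisited:
                cur = route[-1]
                cities = sorted(unvisited)
                # Edge attractiveness: pheromone^alpha * (1/distance)^beta.
                weights = [pheromone[cur][c] ** alpha
                           * (1.0 / dist[cur][c]) ** beta for c in cities]
                nxt = rng.choices(cities, weights=weights)[0]
                route.append(nxt)
                unvisited.remove(nxt)
            length = tour_length(dist, route)
            tours.append((route, length))
            if length < best_len:
                best_route, best_len = route, length
        # Evaporation, then deposit proportional to 1/length on each edge used.
        for i in range(n):
            for j in range(n):
                pheromone[i][j] *= (1.0 - evaporation)
        for route, length in tours:
            for i in range(len(route)):
                a, b = route[i], route[(i + 1) % len(route)]
                pheromone[a][b] += 1.0 / length
                pheromone[b][a] += 1.0 / length
    return best_route, best_len

print(ant_colony_tsp(DIST))
```

On this toy instance the colony converges to the optimal tour length of 80; the payoff of the approach only shows on instances far too large to brute force.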

'The behavior of ants foraging for food may be mimicked in combinatorial optimization to yield near-optimal solutions.'

There are hundreds, if not thousands, of other heuristics that may be applied to provide some solution that will suffice. The important thing to take away, however, is that for large problem sets, there is a sacrifice in finding the absolute shortest route so that a solution may actually be determined with cost-effective computing resources.

Conclusion

Although we cannot disclose the solution we provided to our client, we can reveal a particular way of viewing combinatorial problems related to business optimization. The complexity of the problem described in the first section was translated into a TSP-style problem, from which we determined the most appropriate heuristic to cut costs for our client. As our client had previously used an algorithm similar to the nearest neighbor, we had significant room for improvement, and our recommendation yielded over a 10% decrease in variable costs.

It is relatively easy to see how the initial and final delivery stages involving the couriers could be transposed into a standard TSP problem, albeit one involving several agents, as the problem statement is almost the same as that of the TSP. The intermediate steps were a bit trickier; however, we applied a similar paradigm to increase efficiency.

Many business problems may be thought of in terms of the TSP. Any time there exists a variety of options for inputs and for how to handle them before they reach the output market, yielding an immense number of permutations, the TSP should come to mind. The initial investment in building such a system often pays for itself in a relatively short time frame, and the continuing costs of such a system are lower than the prior operating costs and/or opportunity cost.

Of Mercenaries and Soldiers; In-House Versus Third Party Tech Teams

This article is intended to discuss the nature of developing technology in-house versus utilizing third parties and some associated key hazards to tech entrepreneurs and investors.

“Mercenaries and auxiliaries are useless and dangerous; and if one holds his state based on these arms, he will stand neither firm nor safe… The fact is, they have no other attraction or reason for keeping the field than a trifle of stipend, which is not sufficient to make them willing to die for you.”

The Prince, Chapter XII - Niccolo Machiavelli

Historical Allegory

In a mountain pass just northwest of modern day Beijing in 1211, Jin Dynasty forces were preparing to hold out against a swiftly moving Mongolian army led by Genghis Khan. By this point in his life, Khan had unified the vast Mongolian tribes into a single empire that was spreading its borders with legendary brutality and unfaltering discipline. Wanyan Yongji, the Jin Emperor, however, was confident the Mongolians were no match for his forces. He had gone so far as to build a roughly 200 mile long wall, referred to as the “Jin Dynasty Great Wall”, and was paying the Ongud, a Mongol tribe, a handsome amount to secure its northern border.

'The Battle of Yehuling serves as an important lesson on the use of in-house versus third party teams.'

As the Mongolians advanced south, hugging the Great Wall, the previously immense Jin forces seemed to shrink instantaneously as the outnumbered Mongolian forces grew in number. Those who remained loyal to the Jin Emperor were confused as to what was happening, and then were swiftly slaughtered, staining the ground adjacent to the Great Wall dark red with their blood. After breaking through these defenses, the Mongolians rested for one month and then moved on to the capital city, rapidly winning two more battles on their journey south.

This leads one to question what happened at the Great Wall that would render it so useless against attack, why the Jin forces apparently dwindled as the Mongolian forces swelled, and how the Mongolians had stockpiled enough resources to immediately confront the Jin army after this first victory. Let’s look at some background leading up to these events. As the threat of the nearby Mongolian Empire reached the central city, Emperor Yongji decided to build a wall that would slow down any potential invasion and provide the Jin an advantage in resources and position. To help guard the Wall, he paid the Ongud tribe an immense amount to gain their loyalty. For about 10 years, he regularly reimbursed the Ongud, who were all too eager to collect their money, and he smugly reveled in the peace that his work had seemingly provided his Empire.

Unknown to him, the Mongolians had been fostering relations with the Ongud and several other mercenary groups. Because of their good political standing with one another, the Mongolian Empire had been stockpiling resources near the Great Wall for several years in preparation for battle. The Jin mercenaries were not concerned with these actions of the Mongolians as long as their hefty collection from the Emperor was not threatened.

Once Genghis Khan decided to strike, the mercenaries, who had heard great stories of his army’s power, immediately defected to his side and helped slaughter the Jin soldiers at the Great Wall. The magnitude of the victory was compounded by another mercenary messenger sent by the Jin, who defected as well, and revealed all of their positions. Their swift descent into the Jin imperial city was then facilitated by the years of stockpiled resources the Ongud had allowed them to amass.

The ending of this story should add further weight to this lesson. Because of the immense amount of supplies they still had remaining, the Mongolians and their acquired mercenary army were able to lay siege to the imperial city. Within the city walls, the general Hushahu, who had acted upon the newly-formed disenfranchised attitude within the empire against Emperor Yongji and assassinated him to seize command, and his citizens were starved out for four long years. As the food supply within the city walls depleted, grass, bark, rodents, and eventually fellow citizens took the place of more standard nourishment. The once great empire, in this sorry state, had no choice but to surrender to the lofty terms of Genghis Khan.

Lessons for a Technology-Enabled Business

Although the defeat of a technology-based business does not (typically) end in evisceration and decapitation as in the history presented above, it does hold the capability to devastate the lives of both the founders and the employees.

Let us identify some of the parallels between the Jin Dynasty, or any empire for that matter, and a tech company (ultimately, they face the same task):

  • Certain tasks exist which, if neglected, may jeopardize the entity
  • There is always a competitor looking to usurp your position
  • The role of the leader is to always defend against both external and internal threats while simultaneously growing the entity to provide for the citizens/employees
  • A contingency plan is mandatory

As Emperor Yongji failed to acknowledge how central defending the Great Wall was to his role of protecting the empire, most technology-based startups we have encountered have failed to realize how central the actual technology in their business plan is to their success. Over half of the companies that approach us have started their journey by looking for a deal, typically from the cheapest software provider possible. Although the software does get completed by these cheap third party entities, often outside the agreed upon deadline and for a much larger budget than proposed, it is more often than not riddled with bugs and clunkiness, and worst of all, written in such a way as to incur maximal future technical debt.

Technical debt refers to the notion in software engineering that non-modular, non-commented, and poorly designed source code will incur a very real monetary cost to the business at some point in the future when it is looking to fix, modify, or evolve the software. In contrast to Genghis Khan’s meticulous preparations for a future battle, the business leaders who elect to rely on cheaply created source code, the heart of their business, are setting themselves up for a major hurdle.

It should go without saying: if you are a tech company, meaning your revenue relies on some underlying technical system, extreme effort should be placed into ensuring you are getting the best technology to suit your purposes. Even if you are a non-technical founder or decision maker, it would still be wise to familiarize yourself with the overall topics related to your requirements. You should know the benefits of modular code, how and where your source code will be stored and version-controlled, which language / platform is best for your needs and why, how the software ownership will be structured, and so forth. This does not mean that a more expert consultation cannot deliver a better approach; however, it provides you with a solid footing on which to perform your due diligence in vetting a technical team.

'Outside forces are always looking to devour you.'

As Genghis Khan was looking to usurp the Jin Empire and avenge their slaughter of his ancestor Ambaghai Khan, every startup under the sun faces a similar external threat. Just as the mercenary messenger defected to the Mongolian side and gave up key Jin intelligence, the same scenario is enacted every day by third party tech teams who build the critical infrastructure for one business and then go on to form similar companies or get paid to implement it for competitors. The de facto response to this is to sign a Non-Disclosure Agreement; however, in reality, these are almost never enforceable in court. It is next to impossible to prove that an entity did not come across proprietary information from another source or conceive of it on its own prior to the agreement. Either way, once the information has leaked, if it is of any value, a previously established stronghold is soon to be eroded. Further, the trend in the industry is to provide Software as a Service (SaaS), meaning that the client who purchased the creation of the software does not actually own it or even have access to the source code; rather, they have a license to use it. This often impedes future growth and plans, as the company has now ceded great power to a third party.

Touching upon the final two points, that leaders need to always defend and grow their venture, we come to the most compelling argument: cost. Anyone who has purchased the creation of software, or looked into doing so, knows that it is not a cheap process, especially if you want a quality return. Now let us take on the role of a tech-based company that previously had software created by a third party. Keep in mind that the third party was most likely cutting every corner to drive up their effective hourly rates, building software laden with technical debt. Once an issue arises from this platform, the company contacts the third party team and pays them another large sum of money, the magnitude of which is directly proportional to the technical debt the team embedded in the prior deliverable. Then, say the company has a novel idea for disrupting their market space and wants to build it into their software. They will once again contact a third party team and pay a large sum of cash corresponding to the compounding technical debt plus the new requirements. Thus, the company’s upper management is doing little to defend against an outward flow of capital, the possible leaking of secret and proprietary data, and technical malfunctions, and the company is stifled in growing at a healthy rate by obstacles it has brought upon itself.

Proper Approach to Developing Technology

The solution to the quandaries mentioned above is actually quite simple and has been used in growing successful entities throughout history: utilize an internal development team that has stake within the success of your company.

'A system where everyone is reimbursed properly for their value added yields the most profitable results for all.  The tech team that provides the most critical infrastructure in enabling your business model should not be a third party but rather a partner.'

By financing your project with a combination of both equity and capital, you have essentially purchased an internal tech team. Because the equity lowers the amount of upfront capital you would be paying otherwise, further immediate financial risk is mitigated. Also, because operating capital will be higher, this allows the business to grow quicker and tackle new challenges more effectively.

In this case, however, there may be greater upfront temporal investment in order to sell your company to the tech team at the highest valuation. This requires meticulous attention to detail in your business plan and forecasts. This extra step, which is actually often the first step most successful founders undertake, is often neglected; yet, it is critical from even an operations viewpoint for a healthy business.

Overall, there are few, if any, downsides to this strategy while the benefits are vast.

Lesson to Tech Investors

Veteran tech investors know to look at how management is utilizing their human resources. However, we have encountered several novice investors who have gambled on third parties to deliver, and lost.

Every competent investor we have met focuses on the stake that critical company figures hold and how much sweat equity they are willing to deliver. A third party tech team will provide little or no sweat equity, which hinders not only your technical and business development, but also your financing options when and if capital needs to be raised for growth. No serious investor will consider financing your company when they may as well purchase the third party tech team themselves to deliver the same product or service.

From Concept To Profit

A Guide For Tech-Enabled Entrepreneurs On How To Execute

A lot of people have great ideas that could be turned into viable businesses, yet most of these ideas are never executed and remain in pure abstraction. The gap between conception and profitability is marked with various uncertainties which make the apparent risk to reward ratio even less appealing for those who would otherwise take the first step on the tortuous, but extremely rewarding, path of entrepreneurship. This article describes how INTP can help execute your idea and see it come to fruition.

'Ideas are cheap.  Execution is where the value lies.'

Know Your Market

The first step towards execution is knowing your market. What is your value proposition and how will you differentiate yourself from your competitors? Do you have an idea that’s tailored for average consumer use or for business use? If the end goal is an idea that’s suited for both, it would be best to determine which route should be the first one taken in order to demonstrate the POC (Proof of Concept). Typically, the entrepreneur should pick the business route to implement first due to a lower requirement for startup capital and a more ideal beta testing environment.

From here, the exact use cases for the idea need to be established. Who will use this product? How will they use it? What is the value added for the end user? How will profit be generated? Be able to distinguish between the core functionality of the product and features that simply extend the core functionality. How will you explain your product’s value add to key users in one or two sentences?

Many times, it is useful to have a storyboard of the end user, even assigning names to particular use cases. For example, Bill owns a manufacturing business and needs to easily view all incoming, warehoused, and outgoing inventory with an intuitive User Interface (UI). Setting up the use cases in this way allows INTP to more easily produce backend UML flowcharts that outline the logic of the software as well as frontend wireframes that model how the UIs will look.

What is your value proposition to your target consumer?  What are you offering that your competitors are not?

Prototype & Alpha Test

Once most of the upfront logistical issues have been determined, you are ready to begin implementing a prototype. Whether you have a consumer based product or a business based product, you will need to develop an MVP (Minimum Viable Product): a low cost iteration of your idea that you can present to banks and investors, if it’s a consumer product, or to beta clients, if the end users of your product are businesses. Many entrepreneurs have obtained capital without first having a prototype; however, their WACC (Weighted Average Cost of Capital) is higher, as they have presented investors with greater risk. By having a prototype and a POC through beta testers, in addition to a well-drafted business plan, you will demonstrate to those willing to fund your venture that you know how to manage funds and implement your ideas, eliciting a lower risk premium. INTP can help by creating this business plan and prototype for you.

When we create your prototype, we develop the most cost-effective solution that demonstrates the core functionality of your idea. We have developed enough experience through countless projects to know how to implement the MVP in such a way that it is rapidly developed yet still easy to grow and evolve in the future. The concept at play here is Technical Debt, and it translates directly into financial implications, as the most expensive portion of any software engineering project is developer time.

The final step before releasing your prototype to the initial beta testers is QA (Quality Assurance). Our team goes through the software’s use cases and determines that everything functions according to plan. This is called Alpha Testing and is an important component to user acceptance of your prototype.

Beta Test

Beta Testing is the process of opening your prototype up to an initial customer base to gauge their reaction to your product and to work closely with them in polishing your product towards its initial production version. Regardless of your idea, you need your end-user to LOVE, not just like, your product, and beta testing aims to accomplish this.

For this stage, INTP can help identify these ideal beta customers and help you approach them in such a way that they are eager to work with you. Once they have become your beta customers, we can represent you as your technical team and refine your product according to customer feedback.

Getting your beta test clients to LOVE your product is a key step towards user acceptance in production.

Even if your idea is a consumer product intended for mass consumption, we always advise implementing an initial beta test. This will help you determine market sentiment towards your idea without needing to risk large amounts of capital on going directly to a production version. When you do appeal to lenders and investors for funding, if you are not funding the project from your own means (bootstrapping), you will be able to eliminate profitability ambiguity in relation to market sentiment and your allocation of funds, and thus will receive more ideal financing options. INTP is able to generate technical and financial reports for your business that you can present to potential financiers.

Deploy & Market

When your beta test clients love your product, you will be ready for a public release. We do not recommend rushing to production, because user acceptance is critical to your success. Thus, careful diligence is performed in the beta testing stage to ensure that your beta test clients have their product wishlist fully addressed.

This stage of the software development may be marked with a large overhaul of the previous codebase so that features that were added quickly during beta testing may be more robustly integrated into the source code. Refactoring and rewriting the code base before this point typically ends up being more costly and inefficient. Thus, incurring little technical debt during the MVP phase and paying it off later before production can actually save you time and money while minimizing your market risk.

This stage is also when our graphic design team creates the release-version graphic assets. Graphic design can be expensive, and the INTP team does not feel it is a necessary risk to undertake before your product is ready for production; your product will still be nicely styled and presentable up to that point. These digital assets will additionally be used as marketing material on the Play and App Stores as well as on any other landing pages that represent your brand.

The landing pages described above will be used to funnel traffic and collect market information, and most often consist of a website and various social media platforms. Each landing page offers something unique to the end user, in hopes that a single user may subscribe to multiple platforms. Our internet marketing team has found that SMM (Social Media Marketing), and especially subscription-based marketing, yields the highest conversion rates. INTP utilizes app push notifications as well as the traditional form of email marketing for our subscription model. Our data science team analyzes the data procured by the marketers and determines the specific target demographic groups, and the general protocols for marketing to them, that will generate the highest conversions. Followed together, the steps outlined above maximize SEO (Search Engine Optimization) and the real estate occupied by your listing on the results pages of the major search engines.

Getting Started

Many aspiring entrepreneurs are discouraged from going down the route of actual execution because of various unknowns that linger in their minds. Sadly, a lot of their great ideas end up as perpetual what-ifs and disappointingly missed opportunities. Without knowing exactly what the steps to production are and how to allocate finances, even those who do take the first steps often end up quitting in frustration, having spent large amounts of capital following a loosely constructed plan of execution.

INTP can bring your idea to profitability.

At INTP, we function as an on-demand supplementary technical team to enable entrepreneurs. We provide business feedback and architecture advice based on our experience, have the technical expertise to provide completely custom software, and have the marketing experience to drive the final product to profitability. Our mission as a technology company is to drive our customers’ success at every stage of business, and we would love to talk with you about yours.

For a free consultation on how to get your idea from conception to profitability, please feel free to call us at (303) 902-6422, email us at info@intp.io, or fill out our contact form at www.intp.io/contact. We look forward to working with you.

So You Want An App Built

This article is tailored towards those wishing to get a Smartphone app developed and will go over the available app architectures, whether you want to build it natively or via a platform-agnostic solution, how apps are priced, and lastly, how you can contact us for a free initial consultation regarding your app idea. The intended audience will typically include entrepreneurs wishing to implement their ideas onto Smartphones or businesses looking to interact with their clients in new ways and establish their presence on the app markets.

App Architectures

The three types of app architectures available to choose from are Stand-Alone, Client-Server, and Peer-To-Peer. Each of these has its own advantages and limitations.

Stand-Alone Apps

Stand-Alone apps are the simplest paradigm and only involve the end-user’s Smartphone. The functionality of these apps does not include communicating with other users or remote servers. A good reason to develop this type of app would be if you wanted to display information to your users as well as establish a presence on the app stores. Another term for this type of app is “vanity app”. The pros of this type of app include increased marketing reach and legitimacy from having a place on the official app markets, whereas the cons are that the user cannot communicate with others or with a central server. Because little back-end logic or developer time is required to complete these apps, the cost of obtaining a vanity app is the lowest of the three architectures.

Client-Server Apps

Most commonly, apps utilize the Client-Server model, as demonstrated in Figure 1. In this scenario, the technologies utilized are the Smartphone end-points, a central server, and a database. The database is most often at the core of any app and holds many tables that store the app’s data. These tables store log-in information and any other information relevant to the app.

Figure 1 - Client-Server App Architecture

From the Figure above, we see:

  1. The Smartphone makes a request (very often an HTTP request, but any application-layer network protocol may be utilized) to the server, indicating it would like to perform a CRUD (Create, Read, Update, Delete) operation.
  2. The server processes and handles the request from the Smartphone client and queries the database on behalf of the user.
  3. The database responds to the server’s request and sends back the desired data set to the server.
  4. The server then processes and handles the response from the database and sends this information back to the client.
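The four steps above can be sketched with an in-memory SQLite database standing in for the data store and a plain function standing in for the server's request handler. All names here are illustrative, and the "network" is simulated with JSON strings; in a real app, step 1 would travel over HTTP.

```python
# A minimal sketch of the Client-Server request/response flow.
import json
import sqlite3

# The "database" holds the app's tables (here, a single users table).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

def handle_request(request_json):
    """Steps 2-4: the server parses the client's CRUD request, queries the
    database on the client's behalf, and returns the result to the client."""
    request = json.loads(request_json)
    if request["op"] == "create":
        db.execute("INSERT INTO users (name) VALUES (?)", (request["name"],))
        db.commit()
        return json.dumps({"status": "ok"})
    if request["op"] == "read":
        rows = db.execute("SELECT id, name FROM users").fetchall()
        return json.dumps({"status": "ok", "users": rows})
    return json.dumps({"status": "error", "reason": "unsupported op"})

# Step 1: the Smartphone client serializes a request; the server replies
# with the queried data, which the app then renders in its UI.
handle_request(json.dumps({"op": "create", "name": "Bill"}))
print(handle_request(json.dumps({"op": "read"})))
```

This division of labor is what makes the app a thin User Interface: all persistent state and most of the logic live behind the server boundary.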

As is apparent from the description above, the Smartphone app essentially acts as a User Interface, the server as an intermediary between the app and the database, and the database as the information store. The primary portion of the logic resides in the server and database, rendering the app itself a ‘dumb terminal’ that processes little to no logic and simply relays users’ requests; however, for some special purposes, the app on the Smartphone end-point can be designed to handle the majority of this logic.

In apps processing big data or consuming significant network bandwidth, multiple servers and databases can be deployed to handle the workload. Because of this, the Client-Server model is scalable, although not to the degree of the Peer-To-Peer model.

Peer-To-Peer Apps

Peer-To-Peer apps are those that communicate directly with other Smartphones in their vicinity, leaving the centralized server-database out of the picture and instead implementing a distributed approach. Refer to Figure 2 for a pictorial example. Because of their decentralized nature, these apps are quite scalable since it’s the Smartphone end-points themselves that store all of the data and process all of the workload.

Figure 2 - Peer-To-Peer App Architecture

For app ideas involving user interaction within the immediate vicinity, the Bluetooth and Near Field Communication protocols may be utilized. For those involving Peer-To-Peer interactions at a distance, the WiFi P2P protocol (or similar) is likewise used.

These are the least common of all apps; however, they are an excellent selection for scalability, as growth does not require purchasing extra RAM, disk space, or network bandwidth: the cost of additional resources is borne by the end-users’ devices running the app.
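
The distributed idea can be sketched with a toy in-memory model (hypothetical names; real peers would discover and reach each other over Bluetooth, NFC, or WiFi P2P rather than Python references):

```python
class Peer:
    """A stand-in for one Smartphone in a Peer-To-Peer app."""
    def __init__(self, name):
        self.name = name
        self.data = {}        # each peer stores its own slice of the data
        self.neighbors = []   # peers currently in range

    def ask(self, key):
        # Look locally first, then ask each neighbor directly;
        # note there is no central server anywhere in this flow.
        if key in self.data:
            return self.data[key]
        for peer in self.neighbors:
            if key in peer.data:
                return peer.data[key]
        return None

alice, bob = Peer("alice"), Peer("bob")
alice.neighbors.append(bob)
bob.data["status"] = "nearby"
print(alice.ask("status"))  # nearby
```

Each peer contributes its own storage and processing, which is exactly why the architecture scales without the operator buying more hardware.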

Hybrid Apps

For the sake of completeness, it must be mentioned that a hybrid approach between the Client-Server and Peer-To-Peer paradigms is possible as well. With this approach, the central server and database track whatever global state the business requires, leaving the rest of the work to be done among the peers’ Smartphones themselves.

Native Versus Platform-Agnostic Development

Another consideration for those wishing to develop an app is whether they want the app to be native or platform-agnostic.

Native Apps

Native app development involves creating separate source code for each platform on which the app is to be published. For Android, this means source code in Java (or any other language that compiles to Java bytecode and runs on the Java Virtual Machine); for iOS, Swift; and for Windows Phone, C# (or any other language that compiles to the intermediate language recognized by the Common Language Runtime).

These apps are pricier than their platform-agnostic counterparts, as development time increases with each additional code base to be created and maintained, but they are necessary if performance is crucial for the app or if intricate features are to be implemented.

Further, because Android’s underlying Linux Operating System is open source, native Android apps (particularly on rooted devices) can interact with the system at a much lower level. This exposes powerful potential for Android development that is not possible on iOS and Windows Smartphones because of their proprietary, closed-source nature.

Platform-Agnostic Apps

Platform-agnostic apps are created with one source code base that can be ported to Android, iOS, and Windows Smartphones. This brings the price of the app lower than that of a natively developed app. Yet, because platform-agnostic apps run within the context of a WebView, essentially a browser embedded in the app, the features that can be implemented are fewer than those of a native application.

How We Determine Pricing

At INTP LLC, we like to provide the best solution at the best cost to our clients, and we always utilize technologies that best fit your needs while incurring minimal future maintenance burden, also called ‘technical debt’. Because of this, we do not provide generic per-hour pricing or a one-size-fits-all project fee. Rather, we conduct an initial meeting from which a full Software Requirement Specification is created. From this, we are able to provide a figure based upon forecast development time, hardware resources to be utilized, and the amount of maintenance involved.

The equation below outlines how we determine the pricing of your app:

Price = Developer Time + Designer Time + Resources Utilized + Recurring Maintenance Cost
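
As a hypothetical worked example (all figures invented for illustration, not actual INTP rates), the formula might be applied as:

```python
# Each term of the pricing formula, with assumed illustrative rates.
developer_time = 80 * 75.0   # 80 forecast dev hours at $75/hr (assumed)
designer_time  = 20 * 60.0   # 20 design hours at $60/hr (assumed)
resources      = 300.0       # e.g. first year of server hosting (assumed)
maintenance    = 12 * 50.0   # $50/month recurring maintenance, year one

price = developer_time + designer_time + resources + maintenance
print(price)  # 8100.0
```

The real figures come out of the Software Requirement Specification, which is what makes the forecast of development time and resources possible in the first place.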

Free Initial Consultation

For a free initial consultation regarding your app idea, please feel free to contact us online via www.intp.io/contact, email us at info@intp.io, or call us at 303-902-6422.

How Sitemaps Boost Profits

One of the pages every website needs is a properly created sitemap. The sitemap is a file that tells search engine crawlers how to index your pages. If search engines deem the pages listed in the sitemap to have distinct and important content, they will often display those pages as part of your listing in their search results. To check if your site has one, go to:

http://your-site-domain/sitemap.xml
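
A sitemap is just an XML file in the standard sitemaps.org format listing your pages. A minimal one can be generated with the Python standard library (the URLs here are placeholders for your own pages):

```python
from xml.etree import ElementTree as ET

# The namespace required by the sitemaps.org protocol.
NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

urlset = ET.Element("urlset", xmlns=NS)
for loc in ["https://your-site-domain/", "https://your-site-domain/blog"]:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc          # the page's address
    ET.SubElement(url, "priority").text = "0.8"   # relative importance

sitemap = ET.tostring(urlset, encoding="unicode")
print(sitemap)
```

The resulting document is what you would serve at /sitemap.xml; most site generators and CMS plugins can produce and update it for you automatically.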

A well-formed sitemap not only increases your ranking via SEO (Search Engine Optimization), placing you higher in search results, but, just as importantly, can increase the amount of space your listing occupies. The real estate on the search results page is valuable, as evidenced by the amount of money businesses invest in establishing a better presence there, since it is a large factor in client exposure.

Take a look at the example below, which I found while searching for places that perform brake checks:

Sitemap Example

Now compare that to the business listing below:

Sitemap Example

The top listing is a lot easier to spot due to the space it takes up on the page. Even if it were placed below the bottom one, it would still grab the attention of a potential client and increase the amount of traffic flowing to its corresponding site, in turn increasing the business’ profit.

Live Video Stream Software Unveiled

On March 5th, 2015, INTP LLC unveiled new software for our client, the Society of Petroleum Resources Economists. By navigating to a subdirectory of SPRE’s domain at www.SPREcon.org and inputting their names and emails, members from around the globe were able to view and interact with speaker Carl Larry of Frost and Sullivan via a live video stream and text chat.

Screenshot of Video-Broadcasting Software

However, because the chat feature was not utilized by viewers as intended, a moderator will be tasked with encouraging interactivity at subsequent events. Also, a reminder that Q&A is encouraged will be placed at the top of the chat screen to spur such communication.

General upgrades for future events will also include improved styling of the User Interface.

The Tech Set-Up at the SPRE Event

A Brief SEO Overview

What should words like ‘Panda’, ‘Hummingbird’, ‘Hilltop’, and ‘Edgerank’ mean to those wishing to grow their business? This article aims to provide basic understanding of them and their effects on profit margins.

General

Search Engine Optimization, or SEO, is a key focus among many organizations wishing to increase quality online traffic. The phrase ‘quality traffic’ indicates users who will purchase the good or service being sold or interact with the site in the intended manner. SEO is a continuous process whose goal is to provide the best possible web experience for human users as judged by crawlers: scripts that gather information for search engines to determine how to rank sites for a variety of keywords. Thousands of search engines exist; however, this post will focus on the most popular one, which handles roughly 70% of all searches in the United States: Google.

The ranking algorithms of the large search engines, particularly Google, are proprietary, and the exact variables involved and their associated weights are unknown. Still, a decent amount of information about how they generally work is public, and further data may be gathered empirically, through trial and error. The most important public knowledge, however, is that a search engine’s goal is to rank sites according to how much of an authority in their field users will perceive them to be.

Four major factors that affect site placement are known:

  1. Content
  2. Inbound and outbound links
  3. Social Graph Optimization
  4. On-site factors

Content

It has been said that ‘content is the currency of the Internet’. Content implementation thus holds the greatest weight in rankings. In the past, one merely had to place a collection of keywords near the top of the page to benefit from the search engines’ content algorithms. Soon, web developers were placing duplicate, marginally relevant, and sometimes hidden content in a contest to reap the greatest gain for their clients. None of these strategies actually enhanced the experience of the user, and some actually degraded it. Enter Panda: Google’s algorithm for boosting original, relevant content and penalizing content implementations that simply sought workarounds to the top of the search pages instead of providing more value to the user.

Along those same lines, Google realized that the queries formed by many people, especially those using the microphone to input search text on their mobile phones, were in the form of a question. This furthered the decline of ‘keyword stuffing’ by taking context into consideration. Thus, Hummingbird was born: a radical stepping stone in redefining the role of the search engine.

Inbound and Outbound Links

The original PageRank algorithm upon which Google was founded determined the authority of a site based upon its inbound and outbound links. If a site had X links pointing to it, then X+1 links would surely provide it a higher rank. Soon, web developers looking to exploit the system set up link farms pointing to their sites from shell sites, among other illicit black-hat means. Using its big-data capabilities mixed with its cutting-edge machine learning, Google began penalizing these link farms, sometimes even by removing them from the search engine altogether. Nowadays, with algorithms such as Hilltop, links coming from valid expert sites contribute to site ranking, while those coming from shell sites and farms decrease it.
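
The original PageRank idea can be sketched in a few lines: each page’s score flows to the pages it links to, so pages with many strong inbound links accumulate the most authority. The three-page web below is invented for illustration:

```python
# A hypothetical tiny web: which pages each page links out to.
links = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
rank = {page: 1 / len(links) for page in links}  # start with equal scores
damping = 0.85  # the classic probability of following a link vs. jumping

for _ in range(50):  # iterate until the scores settle
    new_rank = {}
    for page in links:
        # Each page's score is split evenly among its outbound links.
        inbound = sum(rank[p] / len(links[p])
                      for p in links if page in links[p])
        new_rank[page] = (1 - damping) / len(links) + damping * inbound
    rank = new_rank

# C is linked to by both A and B, so it ends up with the highest rank.
print(max(rank, key=rank.get))  # C
```

This is of course a toy; modern link analysis additionally weighs the topical authority of the linking site, which is precisely how link farms lost their power.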

Social Graph Optimization

The third factor mentioned in a site’s ranking is its Social Graph Optimization, or SGO. A large portion of communication now takes place over social media via Liking, Sharing, Tweeting, et cetera. It should come as no surprise, then, that the activity a site receives on platforms such as Facebook and Twitter contributes towards its ranking. The amount of attention a site receives over these media is a function of the number of people to whom its posts are served. This, in turn, is a direct function of algorithms such as Facebook’s EdgeRank, which determine how often, and to whose feeds, to serve the content. Once again, developers and advertisers soon began creating fake accounts from which to Like and Share content in hopes of boosting exposure. The social media platforms are quite intelligent, however, and like the search engines, they implemented methods for determining whether activity was coming from real accounts while penalizing activity determined to be coming from fake ones.

On-Site Factors

The final factor contributing to SEO is on-site factors. A site that loads slowly or serves content in a less-than-ideal manner is a pain for the user, and thus search engines began factoring this into their ranking algorithms. From this, techniques such as caching statically served content and minifying source code became the standard for sites wishing to achieve a greater ranking. Another standard that arose is responsiveness by default: a site’s stylesheets should include media queries for any size of screen, from mobile phones to large-screen Android TVs. Also, be sure your site serves a sitemap and a valid robots.txt for the crawlers.

Summary

A key takeaway of this article should be that there are no ‘hacks’ for getting top search engine results immediately. Quality content and user experience are required to increase rankings. While there are illicit means of obtaining a better position, these are (almost) always caught and penalized, such that any short-term gains turn into expensive long-term liabilities. SEO is an ongoing battle against competitors in the same sector, won via continuous long-term strategy and adherence to the rules set out by the search engines.

INTP LLC Now Has A Blog

Welcome to the INTP LLC blog! We will be posting our musings here, particularly regarding general overviews of technical concepts and their relation to business profit margins.

Be sure to check here often and if you like or learn anything new, please feel free to share it.