A Software Development Philosophy

A software development philosophy guides every developer throughout their career. Some developers adhere to prevailing philosophies written down in books or embodied in code. Others invent their own beliefs based on their view of the software world.

My philosophical influences come from the likes of Domain-Driven Design, Test-Driven Development, the agile software development lifecycle, and a multitude of others.

The following prose is an account of my software development philosophy.

Invest Lightly

When using distinct technologies, I try not to get wrapped up in the details. Technology changes constantly; light strategic investments provide more long-term value. I invest lightly with my keystrokes, so I have less code to maintain down the road.

Stay Liquid

I avoid technical debt like the plague. A mountain of technical debt is no laughing matter; it will threaten the longevity of your business. I recommend paying the debt now, so you don’t pay more later. It’s frighteningly common to see million-dollar software development projects get scrapped and rebuilt for millions more. It’s more often cheaper to pay technical debt than it is to rewrite software.

Everything Changes

When building code, I assume change is coming and sooner than expected. I craft solutions that are easy to change in the future to more economically address new business requirements.

Simplify Security

I protect data with layers like bastion hosts rather than with software configuration files, and I leave configuration files permissive to reduce confusion about access rules. I often remind myself that the only secure computer is a dead computer, and you’d better be sure it’s dead.

Use the Right Tool

Languages are simply programming tools that exist to help us arrive at a solution. I think clearly about which tool will best handle any job.

In software development, many people cling to one tool and master it. Mastering a single tool can be a great approach, and it pays off big for some developers. I prefer to learn many tools and learn each of them well enough to be productive.

I follow the 80/20 rule when learning new tools so I can quickly get up to speed with their most common use cases.

Use Cases

It is of paramount importance to limit the use cases of software. Command-line utilities make it easy to restrict use cases, while user interfaces do not. I do not attempt to handle every use case, to avoid introducing a diverse collection of bugs. If a solution is fulfilling too many complex use cases, I consider creating a new solution.

Writing down a definition of use cases sets a clear contract between myself and my users. When people use software outside of an intended use case, that documentation helps me determine whether I have discovered a valid new use case.

If I am creating general-purpose software, I limit the use cases as much as possible to general-purpose features that can be extended.

Versioning

When building software, my philosophy is to version most things. I keep versions pinned, so upgrades don’t happen without a developer’s knowledge, and I seek a balance between keeping software up to date for security reasons and letting sleeping dogs lie. I abide by the rule: if it isn’t broken, don’t fix it.

Naming Things

When naming variables, I name them in ways that make it easy to rename them. I call things exactly what they are and avoid overloading or overusing the same variable name.

Tracer Bullets

I make heavy use of tracer bullets to punch a hole through the walls between myself and a solution. I’ll pick a requirement that exists far away from the end solution and meet it head-on. I’ll use scattershot methods to prove use cases, theories, and the viability of a solution before investing time and money in building robust solutions.

Driving Business Value

Everything I build has the end user’s requirements placed front and center in my mind. A fancy contract with a lot of complexity won’t sell more products than a simple one. If I can’t rationalize why a feature provides the highest value at a given time, I’ll raise my hand or simply change course on my own.

Single Responsibility

I keep software components isolated with a single responsibility and create user interfaces that have a single responsibility to reduce complexity. I keep tools divided from one another in a way that is logical for users and other software developers, and I follow SOLID principles when creating object-oriented software solutions.

Be Empathetic

I work diligently to understand what people need to do with the software I build. I ask why five times to understand the real problems and keep the big picture in mind. Empathy for people is a cornerstone of a robust software development philosophy.

Step Slowly

When dealing with difficult problems, I step slowly through the problem. I’ll make a single change then step through a debugger to see the results. Slowly stepping through problems helps me isolate them and solve them one at a time.

Automate Everything

I automate tasks religiously. If something is hard to automate and would cost a fortune to do so, I consider a different solution.

Decouple Everything

When tight couplings exist for no good reason, I will decouple them. I summarily dismiss software libraries that aren’t dependency injection ready. I refactor tightly coupled code to reduce complexity and prepare it for change.

No Perfect Solutions

I know that picking the best solution, at the moment, with all the facts laid out, is the way to avoid analysis paralysis.

I don’t fall for the Nirvana fallacy; I make the best decision possible at the last responsible moment, yet at the same time, I take big architectural choices very seriously.

I know that solutions can’t be fast, cheap, and good at the same time, yet I strive for the center of all three.

A Software Development Philosophy Evolves

My software development philosophy is evergreen; it changes as I learn new things. This account of my development philosophy is nowhere near complete, and it never will be. As always, a great magician never reveals all his tricks.

I refine my software development philosophy regularly in the spirit of continuous improvement.

Develop your own philosophy

Here are a few books that will help you develop your own software development philosophy.




How to highlight text on a web page

Google Chrome

How to highlight text on a web page with a few effortless steps.

Google is working on a web standard for this feature. If it gains traction, you may see this feature in other browsers in the future.

Install the Extension

Install the Link to Text Fragment extension in Google Chrome if it is not already installed.

Highlight Text

Navigate to a web page and select the text you want to link to, then right-click the selection and choose Copy Link to Selected Text.

What is The Answer to the Ultimate Question of Life, The Universe, and Everything?

Try It Out

Paste the generated URL in your web browser, and you should see the highlighted text.

42

The generated URL looks like this.

https://en.wikipedia.org/wiki/The_Hitchhiker%27s_Guide_to_the_Galaxy#42,_or_The_Answer_to_the_Ultimate_Question_of_Life,_The_Universe,_and_Everything:~:text=In%20the%20works%2C%20the%20number%2042,Lost%2C%20Star%20Trek%20and%20The%20X-Files.%5B87%5D%5B88%5D

The browser simply takes the text and highlights it based on the text input.
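
The highlighting is driven by the text fragment at the end of the URL. As a rough sketch (this URL is illustrative, not one the extension produced), the anatomy looks like this.

https://example.com/page#:~:text=the%20exact%20text%20to%20highlight

Everything after #:~:text= is the URL-encoded text the browser should find and highlight.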

I’ve wanted a feature like this for quite a while. The main downside is that the highlight is invalidated if the text changes; fixing that would require a versioning standard for web documents. The upside is that the specification accounts for this and will still jump to the section where the content previously appeared. If the page structure changes, all bets are off, and the feature will not work.

I find the feature useful for linking to text that doesn’t change often, if at all.




SignalFx and EC2

SignalFx

SignalFx and EC2 are a match made in heaven. If you need to ensure the uptime of your EC2 instances, a monitoring tool like SignalFx is a must-have.

Follow along for details that will help you use SignalFx to monitor an existing EC2 instance.

If you have different EC2 & SignalFx integration requirements, see this documentation.

Prerequisites

  1. An existing EC2 Instance, Amazon provides a free tier
  2. A SignalFx account, 14-day trials are available on signalfx.com

Setup

Log in to your SignalFx account, then click the INTEGRATIONS menu item.

Press the SFx SMARTAGENT icon, then the SETUP tab.

Follow the installation instructions, then navigate to the INFRASTRUCTURE tab.
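
For reference, the SETUP tab boils down to downloading and running an installer script with your access token. The sketch below is from memory; treat the URL, the --realm flag, and the placeholders as assumptions and copy the exact commands from your SETUP tab.

# Download the SmartAgent installer script (URL and flags are assumptions)
curl -sSL https://dl.signalfx.com/signalfx-agent.sh > /tmp/signalfx-agent.sh
# Install, passing the realm and access token shown in the SignalFx UI
sudo sh /tmp/signalfx-agent.sh --realm <YOUR_REALM> -- <YOUR_ACCESS_TOKEN>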

Clicking the Host link will take you to a view of your EC2 instance’s resource utilization.

That’s all there is to it; now you can fire alerts when bad things happen on your EC2 instance.

Related Content




Learn Signal Analog

The Basics

Learn Signal Analog by doing. Signal Analog is a tool for generating SignalFx dashboards. It’s a Nike open source project on GitHub. My team is migrating from New Relic to SignalFx, and we have opted to use Signal Analog to create our SignalFx resources.

Installation

I ran these commands to add the signal_analog package to my project.

mkdir learn-signal-analog
cd learn-signal-analog
printf 'signal_analog' > requirements.txt
pip3 install -r requirements.txt

Creating a Dashboard

Let the fun begin. I decided to start by creating a grouped dashboard with this example. I added the dashboards.py file to my learn-signal-analog directory, then narrowed the code down to represent a single dashboard.
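
Narrowed down, dashboards.py follows the pattern below. This is a minimal sketch based on the signal_analog documentation; the metric name, chart name, and dashboard name are illustrative.

from signal_analog.flow import Data
from signal_analog.charts import TimeSeriesChart
from signal_analog.dashboards import Dashboard
from signal_analog.cli import CliBuilder

# Stream CPU utilization into a simple time series chart
program = Data('cpu.utilization').publish()
chart = TimeSeriesChart().with_name('CPU Utilization').with_program(program)

# A single dashboard holding the chart
dashboard = Dashboard().with_name('Learn Signal Analog').with_charts(chart)

if __name__ == '__main__':
    # Builds the command-line interface invoked below
    cli = CliBuilder().with_resources(dashboard).build()
    cli()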

Up went the code

./dashboards.py --api-key <MyApiAccessToken> create

In the UI for SignalFx, the working token is labeled API Access Token.

This is what success looks like.

{
  "authorizedWriters": {
    "teams": [],
    "users": []
  },
  "created": 1590460991067,
  "creator": "EYyXD0gA0AA",
  "dashboardConfigs": [
    {
      "configId": "EY58mxKAwAA",
      "dashboardId": "EY58lP0A0AA",
      "descriptionOverride": null,
      "filtersOverride": null,
      "nameOverride": null
    }
  ],
  "dashboards": [
    "EY58lP0A0AA"
  ],
  "description": "",
  "email": null,
  "id": "EY56poZA4AA",
  "lastUpdated": 1590460992006,
  "lastUpdatedBy": "EYyXD0gA0AA",
  "name": "Learn Signal Analog",
  "teams": []
}

The dashboard is present within the dashboard group upon creation.

I then made the dashboard more useful by removing the Network chart and adding a PostgreSQL query time chart.

Signal Analog is an excellent Python module that gives you an infrastructure-as-code option for SignalFx.

Learn Signal Analog – Beyond The Basics




Learn SignalFx

The Basics

SignalFx

Learning SignalFx can be overwhelming. There are a multitude of ways to integrate it with your applications, and choice overload can quickly set in. To combat that, I like to narrow in on a fundamental problem first: I would like to monitor a PostgreSQL database. The following content walks through my journey to do so.

Facts

Metrics and Metadata

The SignalFx agent collects metrics and sends them to SignalFx’s servers. Each metric has one of the following types.

Counters

Counters are straightforward metrics that only take integer values. They can count things like the number of errors that have occurred.

Cumulative Counters

Cumulative counters are scoped to the lifetime of a process or an application. The number of database calls since PostgreSQL started would be a cumulative counter.

Gauges

Gauges measure values over time. The percentage of memory a PostgreSQL server is using is an example of a gauge. The database memory fluctuates, and the gauge metric captures that fluctuation over time.

Metadata

Use metadata to filter, find, and aggregate the metrics you want to chart or alert on. An example would be an environment key-value pair.

environment:prod

The environment metadata will allow you to search by the environment within SignalFx.
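
One way to attach metadata like this is in the agent’s configuration. As a sketch, assuming the SmartAgent’s globalDimensions option, every metric the agent sends would then carry the environment dimension.

# agent.yml - attach an environment dimension to every reported metric
globalDimensions:
  environment: prod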

Charts

Charts provide visualization of the metrics you send to SignalFx. A chart that shows the percentage of memory helps you decide when it’s time to upgrade said memory.

Dashboards

Dashboards allow you to group charts so you can get the big picture, and dashboards themselves are grouped within dashboard groups. We can’t learn SignalFx without talking about dashboards. They come in three flavors.

  • Built-In – Used to provide default dashboards for integrations like PostgreSQL
  • Custom – Any custom dashboards you create
  • User – Primarily used for isolated experimentation and visualization

Detectors and Alerts

Detectors consist of events, alerts, and notifications. They can trigger alerts and notifications based on conditions. It’s possible to chain detectors by triggering additional events and notifications after a detector fires an event.

The Container

I’ve chosen to instrument a PostgreSQL server and the container it lives in. These are technologies I’m comfortable with and can get up and running quickly. Using familiar technologies allows me to learn SignalFx more effectively.

I’ve created a container that installs the PostgreSQL server and the collectd-based SignalFx agent per the advanced installation options.

I configured the agent.yml with these options from the Postgres monitor documentation.

Grab your organization’s access token. It can be found at this location.

Profile -> Organization Settings -> Access Tokens

I chose to store the SignalFx access token in a file. See the SignalFx remote configuration documentation for configuration options and the agent.yml for more context.
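
Pulling those pieces together, my agent.yml looked roughly like the sketch below. The #from remote-config syntax and the collectd/postgresql monitor options are from the SignalFx documentation as I recall them; the credentials and database name are illustrative, so verify against the current docs.

# agent.yml - read the access token from a file instead of inlining it
signalFxAccessToken: {"#from": "/etc/signalfx/token"}
monitors:
  - type: collectd/postgresql
    host: 127.0.0.1
    port: 5432
    username: "postgres"
    password: "<password>"
    databases:
      - name: learnsignalfx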

The Dashboards

Every host gets its own dashboard by default. Now that the agent is running, we can open SignalFx and choose the INFRASTRUCTURE menu item.

We’ll find our host running after data makes its way to the SignalFx API.

In addition to the host’s dashboard, there is a PostgreSQL dashboard provided by SignalFx. The dashboard offers a view of database level resources out of the box.

Poking with Sticks

I connected to the container in another terminal with this command.

docker exec -it learning-signalfx /bin/bash

I ran these commands to do a read-only benchmark test.

# Initialize pgbench's benchmark tables in the learnsignalfx database
pgbench -i learnsignalfx
# Run a select-only (-S) workload with 4 clients and 2 threads for 600 seconds
pgbench -c 4 -j 2 -T 600 -S learnsignalfx

The event predictably created a spike in CPU, as evidenced by the default APM dashboard.

I created a detector for CPU utilization. I then reran the benchmark, and sure enough, an alert fired.
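
Under the hood, a detector is a SignalFlow program. A rough sketch of a CPU detector like mine follows; the metric name and threshold are illustrative.

# Fire an alert when mean CPU utilization stays above 80% for 5 minutes
cpu = data('cpu.utilization').mean()
detect(when(cpu > 80, '5m')).publish('High CPU utilization')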

I then stopped the benchmark. Shortly after that, I received another email letting me know everything was back to normal.

There are other fun benchmark tests you can run with the pgbench command. Enjoy trying interesting detectors, charts, and dashboards on your own.

Learn SignalFx – Beyond the Basics

It appears that books are scarce for SignalFx. These resources will help with learning more about SignalFx.




Full Stack Developer vs Software Developer

Full Stack Developer vs. Software Developer, which is right for you? Let’s reframe the question.

To specialize or not to specialize, that is the question.

The software industry is not the only industry that faces this question. No matter what industry you’re in, it’s a loaded question, and like most loaded questions, the answer is: it depends. In a corporate setting, specialization is typically the norm; within start-ups, it’s the opposite. In a small group, specialization will impede a person’s ability to create value. In a large group, the value specialization brings can be off the charts.

I’m a generalist and have always had a propensity to chase the knowledge dragon.

This approach has worked for me, but it’s not without its challenges. I’m a jack of all trades, master of none. As a result, my imposter syndrome symptoms rage on. My specialist colleagues leave me in the dust on several topics. Fortunately, many people recognize the value of a multi-faceted perspective and will accept my lack of more in-depth knowledge of subjects. I can’t know everything.

Sometimes I consider deep diving into one technology and sticking with it. I then sweep that consideration aside when another shiny paradigm comes along. I’ve always loved learning big concepts that apply to all technologies. I’ve also enjoyed playing with new tools and breaking them ever since I was a child. I would disassemble radios, inspect them, then put them back together again. I would try to fix them, sometimes with success.

Anyway, that’s enough about me, this is about you.

Should you be a full stack developer or a software developer?

Many businesses start with a specific product and specialize in it. It’s usually a successful strategy until it isn’t. It’s 2020, and Zoom has been trending well over the past six months.

Specialization appears to have fueled most of their success. Zoom invested in a specific product with a limited set of applications. In the beginning, their product’s quality was low, but they’ve improved over the years.

Meanwhile, Microsoft is bundling Microsoft Teams into Microsoft Office, and Teams now has video conferencing. Microsoft is not a company that specializes; it has always focused on creating general solutions for broad applications. It remains to be seen whether Teams’ video conferencing will gain a foothold.

So why am I talking about companies and not you?

Companies are an excellent way to observe the advantages of generalization vs. specialization. We can correlate the outcomes companies realize with either approach, then choose the right path based on our own goals. Start by asking yourself some questions.

  • Do I want to start my own business in the future?
  • Does mastery bring me joy?
  • What will I do with a broad set of knowledge?
  • What about a deep set of knowledge?

This question is loaded and requires some introspection to answer.

If you’re building a team, finding a balance of generalists and specialists is likely the right approach. Without specialists, the problems that require a deep understanding of software technology are impossible to solve. Without a generalist, a team might miss out on a perspective that would make a challenging problem go away altogether. The law of the instrument comes into play when we tip the balance too far toward specialization.

I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.

Abraham Maslow – 1966

I once worked with a guy who used the SQL Server database for everything. He succeeded in tightly coupling the whole enterprise to SQL Server, going so far as to store the entire user interface definition in the database. He was enthusiastic about the power of this paradigm. I wasn’t sure if he was joking, but he talked about writing a SQL MVC book.

On the other hand, knowing a small amount about everything isn’t useful either. It’s important to dive deep into key concepts like design patterns, clean coding techniques, and so on.

In the end, deciding between being (or hiring) a full stack developer vs. a software developer is not cut and dried. The decision depends on your goals, your organization’s goals, and the environment. Here is my take on the subject.

Full Stack Developer vs Software Developer

Which is the right choice?

Software development is incredibly complex with lots of nooks and crannies.

The Software Developer is already spread thin and must bear a large cognitive load.

The Full Stack Developer bears an even larger cognitive load and may become overwhelmed.

Full Stack Developer

Pros

  • Can move from team to team
  • Variety is the spice of life
  • Learns to quickly learn new things
  • Brings a diverse perspective to problems

Cons

  • Cognitive load can get out of hand
  • Squirrel?!
  • Focusing can be a challenge
  • Might make the wrong decisions when using complex tools

Software Developer

Pros

  • He had one job, and he did it well
  • Can become the lynchpin that makes the plan come together
  • It’s easy to find a job if your specialization is in demand
  • Lower workloads bring better focus

Cons

  • Might miss the forest for the trees
  • Opportunities to save time and effort could be missed
  • If the specialization becomes obsolete, what now?
  • Overfocus is a thing. Are you missing an opportunity?

Conclusion

Like any comparison or solution, take mine with a grain of salt. The differences between these two disciplines can be vast, or a developer may sit just a few frameworks away from full stack.

It’s not easy to see or pick the right path. Whether you’re a developer or a company deciding between the two, you have your work cut out for you. Do some soul searching and try to find the right balance.




Learn Terraform

Terraform

In my last post, Learning Terraform, I committed to learning Terraform. I’ve started reading Terraform: Up & Running: Writing Infrastructure as Code, and it has given me a great primer thus far. I’m converting my CloudFormation templates to Terraform. I’m all in at this point and am convinced that this is my preferred tool for building infrastructure as code.

The Basics

You’ll want to install the Terraform command-line interface. I ran into some issues with the Homebrew version of Terraform, so I installed the binary. Terraform provides a single binary with zero dependencies, which warms my heart.

I love shell aliases, so I set up the following aliases for my Terraform activities.

alias tf='terraform'
alias tfa='terraform apply'
alias tfd='export TF_LOG=DEBUG'
alias tfdl='unset TF_LOG'
alias tfdr='terraform destroy'
alias tfp='terraform plan'
alias tfs='terraform show'
alias tft='export TF_LOG=TRACE'

I’ve found these to be the most common commands I’m running while developing new templates.

If you’re on Windows 10, you can install a Linux shell, or if you’re stuck in the past, you can use Cygwin.

On either platform, add the aliases to your .bash_profile, then run this command.

source ~/.bash_profile

I recommend using a Terraform IDE extension for completion and syntax highlighting.

Don’t forget the golden rule of Terraform before diving too deep.

The master branch of the live repository should be a 1:1 representation of what’s deployed in production.

If you’re chanting incantations to get your Terraform managed resources deployed, step back and rethink your approach. Following this rule will also allow Terraform to serve as documentation.

Testing

Like any other code, starting with tests is a great idea. Proving your code does what it’s supposed to do is a cornerstone of great software.

Testing Terraform requires a real environment. The tests you write should verify that your resources actually made it to the environment you are targeting. You can do this by feeding output values to simple scripts that check whether resources exist after deployment.
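
Here’s a minimal sketch of such a check, assuming a module with an instance_id output, the AWS CLI installed, and a Terraform version that supports output -raw.

# Fail loudly if the instance we just deployed doesn't exist
instance_id=$(terraform output -raw instance_id)
aws ec2 describe-instances --instance-ids "$instance_id" > /dev/null \
  && echo "PASS: instance exists" \
  || echo "FAIL: instance missing"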

Tools like Terratest can give you a leg up if you want to get more sophisticated.

Providers

Learning Terraform requires an understanding of providers. They are a key feature that harnesses the true power of Terraform, and the collection of providers Terraform supports is numerous. I’ve tried the New Relic and AWS providers with great success.

Importing a New Relic dashboard went surprisingly smoothly. It was a bit painful picking apart the extra attributes after import, but overall it saved time. I’ve added a new Terraform feature proposal on GitHub; upvote that issue if you’re interested in saving even more time.

Data Sources

Data sources are a key concept that allows you to query resources using various APIs. These read-only resources are usually created outside of your Terraform module; they may have been created manually or in another Terraform module.

Terraform data sources are necessary for resources that are not managed by Terraform.

The syntax is super simple. This code example, from the Terraform website, is a perfect use case: querying an AMI to associate with a launch configuration.

# Find the latest available AMI that is tagged with Component = web
data "aws_ami" "web" {
  filter {
    name   = "state"
    values = ["available"]
  }

  filter {
    name   = "tag:Component"
    values = ["web"]
  }

  most_recent = true
}
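
The data source can then feed other resources. A minimal sketch, with an illustrative instance type:

# Launch configuration using the AMI found by the data source above
resource "aws_launch_configuration" "web" {
  image_id      = data.aws_ami.web.id
  instance_type = "t3.micro"
}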

Stack Overflow has a post that answers the Terraform data source use cases in more detail.

State

Terraform’s magic is its ability to manage resource state. State management can be painful; kudos to HashiCorp for taking on the challenge. If you’re working on a team, this feature is necessary to ensure the stability of your resources. If you’re not, you can get by without thinking much about it.

If we’re a team managing state, we’ll need to make sure we don’t cross the streams. Luckily, Terraform provides the ability to store your state in a remote backend, which lets a team edit the same resources by keeping the state in a central location.

There are multiple backends that can store Terraform’s state.

The most common backend is S3. It’s simple and works, and this is how it’s done.

terraform {
  backend "s3" {
    bucket = "<Bucket Name>"
    key    = "<Bucket Key Where State Will Be Stored>"
    region = "us-east-1"
  }
}

The only thing you need to do is add that to your Terraform template, and you’re done. If you add this later, you’ll need to run terraform init again to reinitialize the state.

Remember

  • Some Terraform state is eventually consistent. If a resource fails to deploy, you’ll need to run it again after fixing the problem. Use the depends_on meta-argument to avoid some of this back and forth work.
  • Valid plans can and will fail. Terraform can’t handle every edge case in the universe. Failures are usually caused by not importing existing resources.
  • Commit to only using Terraform to manage your resources. If you edit them through user interfaces instead, weird errors will occur.

Modules

A Terraform module is simply a directory of Terraform templates, initialized by running this command.

terraform init

We’re all using Terraform modules by way of the Terraform command-line interface. Terraform input variables and output values control your module’s behavior. Access child module outputs with this syntax.

module.MODULENAME.OUTPUTVARIABLENAME

The calling module should handle the provider definition. Here’s an example of calling a module; we already know what a module looks like.

provider "aws" { 
    region = "us-east-1"
} 

module "webserver_cluster" { 
    source = "../modules/services/webserver-cluster" 
}

This code assumes your modules folder lives outside of the Terraform template you are working on.

That’s about all there is to using a module although inputs and outputs will again come into play.
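
As a sketch of how they come into play, a module might declare an input variable and an output value like this; the names and the referenced resource are illustrative, not from a real module.

# variables.tf inside the module
variable "instance_type" {
  description = "EC2 instance type for the cluster"
  type        = string
  default     = "t3.micro"
}

# outputs.tf inside the module
output "security_group_id" {
  description = "ID of the cluster's security group"
  value       = aws_security_group.instance.id
}

The caller can then pass instance_type inside the module block and read module.webserver_cluster.security_group_id.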

Remember

  • When creating a reusable module, always prefer using a separate resource. Separate resources allow callers of your module to extend it with custom rules; see the sketch after this list.
  • Version your modules to avoid breaking dependent code
  • Make your modules configurable for added flexibility
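
Here’s what extending a module with a separate resource might look like, reusing the hypothetical security_group_id output from the sketch above.

# The caller attaches an extra ingress rule without modifying the module
resource "aws_security_group_rule" "allow_testing" {
  type              = "ingress"
  from_port         = 12345
  to_port           = 12345
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
  security_group_id = module.webserver_cluster.security_group_id
}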

Import

Behold the Terraform import command.

terraform import aws_s3_bucket.jeffbaileywebsite jeffbaileywebsite

Run this command when you have existing resources you would like to manage with Terraform. If you’re migrating from CloudFormation templates, this command will be your best friend.

Importing is awkward, but adding the resource with a local name will allow you to run the import command.

resource "aws_s3_bucket" "bucket" {
}

Once complete, you can run this command to get a representation of the resource you imported.

terraform show

If you set up the aliases above, you can simply type tfs to run the same command.

Copy the output of the resource you are importing into your template, then run terraform plan (tfp for short). The plan will complain about fields like id that can’t be set in a Terraform template. Remove the invalid fields and run tfp again to see if your template is valid. If it is, you can run this command.

terraform apply

Check that you haven’t deleted your infrastructure then commit your template. Now you’re off to the races with Terraforming further changes to your resources in the future.

Challenges

Any tool comes with its challenges. Here are some of the problems you will encounter.

Problem: Avoiding an override of your remote state with local state

Solutions:

  • Deploy changes with a build pipeline that only allows one deployment at a time
  • Use remote state and diligently run the terraform plan command locally to capture the latest changes

Problem: Ensuring resources are in the state they are supposed to be in

Solutions:

  • Create unit and integration tests to validate that your resources deployed as expected
  • Use an isolated testing environment to validate all your changes and run your tests

Small challenges for great rewards

Conclusion

While learning Terraform might save your life, it’s not all roses and sunshine. There will be problems using it, like any other tool.

While creating an AWS Cost and Usage Report, an internal server error occurred: the aws_cur_report_definition resource failed to deploy unless it targeted the us-east-1 region. When I changed the template to target us-east-1 instead of us-west-2, everything worked. CloudFormation might have been more helpful.

Adopting Terraform within your team will require a culture change. The team will need to understand and appreciate the benefits of Terraform. Editing a dashboard in a slick user interface is convenient, but it doesn’t share the intent with other team members. Pull requests prompt your team to question a dashboard change, and they give everyone an opportunity to learn about new features added to a dashboard.

The bottom line

Changing a dashboard can cause your employer to lose millions in revenue. If your widget says everything is fine, but it’s not, was the convenience of the UI worth the cost? Make sure your team sees the value before asking them to delve into Terraform.

Overall, Terraform is great and getting better. I’m committed to using it for my IaC efforts going forward.

Continue Learning Terraform

If you work for a company that has stringent compliance workflows, watch this video from Ellie Mae. They automated pretty much everything to capture every change everywhere.




Open in Visual Studio Code on macOS

Open in Visual Studio Code on macOS using Automator.

1. Open Automator
2. Choose a new Quick Action

3. Search for the Run Shell Script action

4. Configure the command with the following settings

  • Workflow receives current: files or folders
  • In: Finder
  • Image (Optional): I used this image
  • Pass Input: as arguments
  • Command: open -n -b "com.microsoft.VSCode" --args "$1"

5. Save your new quick action and name it something like Open with Visual Studio Code
6. Open Finder
7. Choose a folder you want to open with Visual Studio Code
8. Double-tap or right-click, depending on your input device
9. Choose Open with Visual Studio Code
10. Tada!




The Blame Game – Deadly Cut 3

Have you ever been stuck in whodunit limbo, wedged between two companies with no way out? One company says their product no longer works because another company broke theirs. These days, it’s usually a large company that breaks the product you’re using due to an API policy change. In the old days, it was two companies refusing to admit they were responsible for their software failures. What a counterproductive place to be, and it’s not your fault.

Software developers need to take full responsibility for the solutions they produce. It isn’t a user’s fault if you bet on the wrong API, and you’re unable to deliver working features. It’s untenable that users are put in this position daily. This position has bitten me in the past as a user and a developer. It’s not a fun place to be.

Users shouldn’t be responsible for fixing your software problems. If your users need to contact another company to fix a problem with your software, you’ve failed miserably. It isn’t the user’s fault, and there are no excuses. Learn to design better systems that don’t introduce a single point of failure as a feature.

Solutions

  • Don’t put all your eggs in one basket. If your software solely depends on one vendor’s API, reconsider your design
  • Directly engage with the vendor that is causing issues for your software. Make sure you have a way to influence the vendor and resolve problems. Ideally, use open source products and avoid the vendor problem altogether.
  • Determine ways to use alternative vendors to service your users. If a user can meet their use case with a different vendor, take on the burden of building and keeping that vendor bridge intact to build redundancy. Hiding vendors behind your own interface, as sketched below, makes this tractable.
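
A minimal sketch of that idea in Python, assuming a hypothetical video conferencing product; the interface keeps vendor-specific code in one replaceable place.

from abc import ABC, abstractmethod

class MeetingProvider(ABC):
    """Vendor-neutral interface the rest of the application codes against."""

    @abstractmethod
    def create_meeting(self, topic: str) -> str:
        """Create a meeting and return its join URL."""

class AcmeMeetings(MeetingProvider):
    """One vendor's implementation; a second vendor is one subclass away."""

    def create_meeting(self, topic: str) -> str:
        # Call the vendor's API here; swapping vendors never touches callers.
        return f"https://meetings.example.invalid/join/{topic}"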




Push declined due to email privacy restrictions on GitHub

Push declined due to email privacy restrictions is an error you may receive when pushing code to GitHub.

This error occurs when a commit’s author email is your private email address and you have enabled Block command line pushes that expose my email in your GitHub Email Settings.

To remedy this, run these commands after you have found your noreply email within the GitHub Email Settings.

# Use your noreply address for future commits
git config --global user.email "<Numeric Value>+<Your GitHub Username>@users.noreply.github.com"
# Rewrite the offending commits, marking each one "edit" in the editor
git rebase -i HEAD~<Number of Commits>
# At each stop, re-stamp the commit with the corrected author email
git commit --amend --reset-author --no-edit
git rebase --continue
git push

See this Stack Overflow question for more details.