Lance Whatley

Software developer, father, and husband in Atlanta, GA, USA - https://lance.to

Node package manager (npm) is the package manager that ships with a fresh Node.js install and is arguably the de facto package manager you'll use when creating and publishing your open source TypeScript/JavaScript project(s). Yarn is the most popular and widely known competitor to npm, but this guide will focus on publishing to the npm registry. Since Yarn installs packages from the npm registry by default, users of either client will be able to install your package once it's published.

When I first created a package that I thought would be cool to open source for others to use, I remember having to search all over the internet, from the npm website to several disparate tutorials, to piece together everything needed to publish it. I feel this can be a barrier to entry for newcomers that could be removed if there were a single, short, but complete list of steps to get started.

1. Finding an available name

This could arguably be the last step in this list, but because it's nice to scaffold your project with its final name from the start, it's worth confirming up front that the name you'd like to use is available on npm.

To do this, simply navigate to https://www.npmjs.com/package/$YOUR_PACKAGE_NAME and if you get a 404 error, your package name is available to use! For example, if you wanted to name your package mygreatpackage, the following screenshot shows this package name is available to use on npm today.

npm package is available!
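You can also check availability from the terminal with npm itself. A quick sketch (exact error output varies by npm version, but an unclaimed name produces an E404 error):

$ # if nothing is published under this name, npm exits with an E404 error
$ npm view mygreatpackage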

2. Add this name to your package.json

What a package.json file is and does within the context of writing a TypeScript or JavaScript package is outside the scope of this tutorial, but what is in scope is making sure npm knows what your package is named. You need to make sure the name field is set to the name you decided on in step 1 above.

There might be many more fields populated in your package.json, but the following illustrates what the name should look like assuming you name your package mygreatpackage:

{
  "name": "mygreatpackage",
  "version": "0.0.1",
  "description": "The best open source package available on the internets."
}

3. Consider hooks/callbacks that need to run at publish time

We won't go into detail on the various scripts and callbacks that can be executed when you publish your package to npm (or other package managers), since there is an entire blog post or two we could write discussing them, but you should consider, understand, and possibly use the various lifecycle hooks available at publish time so your package is completely ready to be used when a random user on the internet installs it in their project.
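As a minimal sketch, npm runs certain well-known script names automatically around publishing: prepublishOnly runs only before npm publish, and prepare runs before the package is packed (and also on local installs). The build and test commands below are illustrative; swap in whatever your project actually uses.

{
  "scripts": {
    "build": "tsc",
    "test": "jest",
    "prepare": "npm run build",
    "prepublishOnly": "npm test"
  }
}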

4. main and/or bin

Is your project a library that will be used inside a user's codebase, like Vue or React? Is it a CLI tool that someone should be able to execute from the terminal to accomplish some goal, like Gulp or Grunt? The answers to these questions determine whether main or bin (or both) need to be populated in your package.json. If you built a library that a user should be able to use in their project with import MyGreatPackage from 'mygreatpackage', you should provide the entry point to your library in the main field:

{
  "name": "mygreatpackage",
  "version": "0.0.1",
  "description": "The best open source package available on the internets.",
  "main": "./dist/mygreatpackage.js"
}

And if you have an executable that the user should be able to call from the terminal via $ mygreatpackage (when installed globally with $ npm install -g mygreatpackage) or $ npx mygreatpackage (when installed in the current project context), you need to supply an entry point to the executable in the bin field:

{
  "name": "mygreatpackage",
  "version": "0.0.1",
  "description": "The best open source package available on the internets.",
  "bin": {
    "mygreatpackage": "./bin/mygreatpackage"
  }
}
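One detail worth noting: the file that bin points to should start with a shebang so the shell knows to run it with Node. A minimal sketch of what ./bin/mygreatpackage might contain (the required path is hypothetical; point it at your real CLI entry point):

#!/usr/bin/env node

// hand off to the compiled CLI logic elsewhere in the package
// (this path is illustrative)
require('../dist/cli.js');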

5. Publish your package!

When you're satisfied with your package, it's time to publish it to npm. This is as simple as running

$ npm publish

in your terminal! Assuming everything goes as planned, you should receive a confirmation from npm that your package was published! You should now be able to run $ npm install mygreatpackage from any other project and it will install your package in that project scope.
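One prerequisite worth mentioning: npm needs to know who you are before it will accept a publish, so if you've never authenticated from your machine, log in first:

$ # one-time authentication against the npm registry
$ npm login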

6. Package versioning after adding features and/or making changes

You will almost certainly make updates to your package and add (or even at some point remove) features. In order to publish your changes, npm requires that you update your package version in package.json, as you cannot publish over an existing version.

npm uses semantic versioning (semver) for package version numbers, and it's likely not a bad idea to bump your versions based on semver's ideas so your users can pin your package appropriately in their projects, for example always pulling the latest patch release within a given major and minor version to avoid installing breaking changes. npm gives you a few helpful commands that bump the version in package.json and create a new commit (and git tag, when run inside a git repo) for you:

$ # update your package to a new major version
$ # 0.0.1 will become 1.0.0
$ # 1.0.0 will become 2.0.0
$ # 0.5.5 will become 1.0.0
$ npm version major
$
$ # update your package to a new minor version
$ # 0.0.1 will become 0.1.0
$ # 1.0.0 will become 1.1.0
$ # 0.5.5 will become 0.6.0
$ npm version minor
$
$ # update your package to a new patch version
$ # 0.0.1 will become 0.0.2
$ # 1.0.0 will become 1.0.1
$ # 0.5.5 will become 0.5.6
$ npm version patch

After you update the package version you can republish with $ npm publish to push the latest and greatest code to npm.

BONUS: scoped packages

If you have an npm organization you'd like to publish your packages under, instead of the global npm scope (the default), you can publish scoped packages of the form @your-org/package-name. This might be applicable if you're publishing private, closed source packages that you use across several projects and want a single place to pull the latest code from, or if you're publishing open source software that should live within the organization's scope for IP or other reasons.

By default, scoped packages are private and not installable by anyone outside the npm organization, but you can publish them with public access, which makes them installable like any other npm package. We won't discuss this in detail in this post, but maybe we'll talk more about it later! :)
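For reference, publishing a scoped package publicly is a one-flag change (assuming the name field in your package.json is set to something like @your-org/mygreatpackage):

$ # scoped packages default to restricted (private); this makes the package public
$ npm publish --access public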

Some epic rivalries:

  • Batman vs. Superman
  • Right vs. Left
  • Tabs vs. Spaces
  • Monolith vs. Microservices vs. Monorepo

Monoliths, Microservices, & Monorepos

You want to get a room of stale, probably introverted, nerdy software developers heated up? Ask them whether they prefer tabs or spaces. Or maybe which programming language or IDE reigns supreme. One final debate that's sure to get the room stirring is whether you should build your project as a Monolith, with Microservices, or as a Monorepo.

For those who might not have heard these terms or concepts, or who have but only in limited detail, here's a short* breakdown:

  1. Monolith: Think of this as a single codebase used to build all of the functionality your product or business needs to get things done. You probably have a backend with a single API, a single frontend, and a single test suite across the entire codebase, regardless of how big or small it is.
  2. Microservice (multi-repo in the image below): A small codebase that does one thing, or a very small number of things, well. Your business or product likely contains lots of small services like this, all of which focus on solving a very specific problem. A single microservice is likely a standalone API or something similar with an easy interface for communicating with other services (HTTP, REST, GraphQL, etc.), and it often has its own standalone, segregated components separate from any other service (think database(s), caches, test suites, etc.).
  3. Monorepo (or mono-repo): A single repository that contains much, if not all, of the code for a business or product to function, but that could be (and likely is) organized in some logical structure that separates services, apps, APIs, SDKs, or other codebases into their own little buckets inside the repo. You can likely git clone this repo and it will fetch all the code that exists for the business and/or product, but concerns are segregated based on the file structure inside.

Monolith, Multi-repo, Monorepo

* There's a ton more detail we could add about what constitutes each of these concepts, but in an effort to encourage conciseness we'll keep it simple for now.

90% Monolith

To start, there isn't a one-size-fits-all answer as to which structure all projects should use. The answer depends on a lot of things, like your use case, business, product/app design, etc. That said, a large majority of products/projects/apps/etc. aren't revenue generating and/or don't make it past a few hundred or a few thousand concurrent users. What I'm about to say isn't data driven, and I don't have any references to support this number other than anecdotal experience, but I would argue somewhere around 90% of projects never need to go beyond a Monolith.

You may be asking, “this article is written to advocate for Monorepos, so why are you saying such a large percentage of projects only need a Monolith?” That's a good point, but simply put, a monolithic codebase makes things easier. Having a single codebase where all your business logic lives and all your APIs use the same language, framework(s), authentication mechanism, middlewares, database(s), etc. will generally provide a quality-of-life improvement and save time. You can focus on solving your business problems instead of spending a bunch of time scaffolding a new microservice and making decisions about mundane aspects like which programming language to use, which database(s) make the most sense, how to handle authentication, how to support communication with other services, which test suite to use, etc.

That being said, Monoliths can have several limitations and problems when a project gets bigger and starts to scale in a significant way. Here are a few scenarios you might experience when it's time to start considering a move from a Monolith to another architecture:

  • Your successful app that is now scaling has more stringent performance requirements, and your Ruby on Rails app isn't fast enough to consume and respond to thousands of requests per second in a reasonable time.
  • Your development team has grown from two to ten people across two or three different teams, and you start to notice large merge conflicts taking more of their time to resolve when merging to the main branch.
  • In a year you went from a few gigabytes of data to approaching your first terabyte. Your single database is struggling to scale and keep query execution times at a reasonable, expected level (single-digit to tens of milliseconds).

So... Microservices? Monorepo?

Instead of talking through the pros and cons of Microservices and Monorepos to describe how you can structure your app(s), I'll walk through why and how a Monorepo has been such a success for Phalanx at risk3sixty and why we opted for it over Microservices.

Phase 1: Create our first microservice

A little over a year ago at the time of writing, we had a Monolithic Node.js web app with a Vue frontend. We had several background jobs using node-resque, and all of our data was stored in either a single Postgres or Redis database. The catalyst that triggered us to consider, and ultimately separate, a service into its own repo with its own dependencies, APIs, tests, etc. was the size of our Monolith and its slow build/deploy times. We originally used ES6 and ES7 compliant JavaScript throughout our app and babel to transpile it. We were also starting a transition to TypeScript, so our build chain was compiling ES7 JavaScript down to code the currently-supported version of Node.js could run, compiling TypeScript files to JavaScript, and running a number of additional downstream tasks to get our app ready to deploy. As you can imagine, as we built out business logic and APIs, the codebase grew and builds took longer and longer.

The first service we broke out of the Monolith, what I'll call our image service, had puppeteer and sharp as dependencies and was a simple API that would take URLs to screenshot (using puppeteer) or convert images to a user-provided specification (resizing images, changing colors, etc.). Obviously, instead of just adding new API endpoints and supporting code to our Monolith, we had to set up a new standalone repo, a package.json file with all dependencies, a web server, the required middlewares, a scheme for organizing endpoints, etc. We also had to build out the library we would use to communicate with this new service from our original Monolith, where the majority of our business logic lived, since we could no longer simply import Dep from './dep' like we might have done previously.

While this process took a little more time than just adding the APIs to our Monolith would have, once we finished the prototype we had a new app that took a fraction of the time to build and run compared to our Monolith. That alone made a huge impact, and we were satisfied with the result.

Great! Now let's run it all together

Awesome, we now have a Dockerfile and docker-compose.yml in our Monolith that start our main app and all dependent databases in their own containers, and a new Dockerfile and docker-compose.yml in our image service that run it. Uh oh, how do we handle networking between different docker-compose environments? The best option is to have everything in a single docker-compose.yml so we can name our services and set up our environment such that all services can easily communicate with each other. But where would this new, aggregate docker-compose.yml file live?

Monorepo it is

The way we solved this was to restructure our main codebase: we added language namespaces, cmd vs. pkg directories to distinguish between standalone apps and libraries/SDKs, and finally the individual projects themselves. At this point we could create a root docker-compose.yml, add Dockerfile contexts for whichever services we need to include, and easily combine our original Monolith web app with our new image service. This worked great, and we were soon running both services and communicating between them with little headache.

Our directory structure for our Monorepo was as follows after adding a couple of SDKs and libraries and beginning a Go API:

phalanx/
├── .circleci/
│   └── config.yml
├── nodejs/
│   ├── cmd/
│   │   ├── img-service
│   │   └── phalanx
│   └── pkg/
│       ├── phalanx-node-sdk
│       └── phalanx-utilities
├── go/
│   ├── cmd/
│   │   └── phalanx-go-api
│   └── pkg/
│       └── phalanx-go-sdk
├── docker-compose.yml
└── README.md
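For illustration, a root docker-compose.yml over this structure might look roughly like the sketch below. The service names, ports, and image versions are assumptions rather than our exact file; the point is that every service lives in one compose environment and can reach the others by name (e.g. http://img-service):

version: "3"
services:
  phalanx:
    build:
      context: ./nodejs
      dockerfile: cmd/phalanx/Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - postgres
      - redis
      - img-service
  img-service:
    build:
      context: ./nodejs
      dockerfile: cmd/img-service/Dockerfile
  postgres:
    image: postgres:12
  redis:
    image: redis:5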

Conclusion

I'll reiterate that there is no one-size-fits-all solution for how to structure your project(s). The answer depends on a number of factors, from size and scalability needs to what you're comfortable with as a developer. Our Monorepo experience has been outstanding so far, and we now have ~20 different packages, libraries, apps, and APIs within our Monorepo that are painless to change, add features to, test, and deploy.

Not only is the R&D experience nice, but we can set up CI to only run tests for the packages we're working on, so you're not running every test across the entire set of apps on each deploy, just the ones you want or that have changed. Finally, teams can own their own apps or libraries, and you're almost never going to cause merge conflicts on the main branch against other teams' codebases, even though you're technically working in the same repo.

As you scale your business and apps, I highly recommend looking into this structure. It supports rapid prototyping and development and keeps things clean and scalable so you can focus on the business problems you're solving!

The story of our baby boy Max, born on May 26, 2020.

How it started / How it's going
[Photos: Max early vs. Max now]

First trimester screening – high risk for Down syndrome

It was the holiday season in mid-December 2019, and my wife Nicole and I were at the gym trying to burn a couple of calories before we headed down to my Dad's later that day to celebrate the holidays with his side of the family. After our workout, I was waiting for Nicole in the lobby, and it took her a little longer than usual to come out of the locker room. She finally came out with tears in her eyes. I comforted her and asked what was wrong. She said the results had come back from her first trimester screening, and there were elevated protein levels indicating a higher risk of our baby having Down syndrome. More tests were needed to confirm.

I'm not sure if it was because my initial instinct was to comfort Nicole (she never cries), because I knew it didn't matter what abnormalities or conditions our baby had, that we would love him or her just the same, or a combination of both, but my initial impulse was the opposite of panic or worry for the future. I was totally comfortable with what I had just heard and knew that, no matter the prognosis, we would do what we needed to make it work.

I said what I could to comfort Nicole, and that same day, before heading to my Dad's, we went back to the doctor for a follow-up: a highly sensitive and specific test that could confirm with 99% certainty, via DNA analysis, whether there was a chromosomal abnormality. Then we went about our day and week. I'd be lying if I said the thought wasn't constantly on my mind, since the results would in fact change the course of our lives, but I would describe the feeling as more anxiousness, and even excitement, than fear or concern, and we were ready for whatever might come our way.

The bilateral cleft

On our next visit to the doctor for the 20-week anatomy scan (we went around 15-16 weeks because of the previous results we had received), the doctor casually came in after the radiologist did her work and said, “I guess you know by now your baby doesn't appear to have Down syndrome,” as if we should've already received the news. Nicole and I looked at each other and smiled, but I know our heart rates and emotions were racing even as we maintained our poker faces so as not to look like weirdos in the room with the doctor. But then he gave us some new news: our baby appeared to have a bilateral cleft. We asked some questions, finished up, went home, and I think we each independently went on a research bender to find out as much information as we could. In short, the most pertinent information we found was:

  • There doesn't appear to have been anything we could've done differently to prevent it.
  • With a bilateral cleft (meaning both sides of the face are affected, as opposed to one side, or unilateral), there's a higher chance that the palate is also affected.
  • There will be at least one surgery at around 3-5 months old if the lip is affected, and another at the 10-12 month mark if the palate is affected.

Nicole joined some communities of mothers with cleft babies and started to learn about their journey. We were ready, at peace, and comfortable with whatever came our way.

Most importantly though, we were ready to meet our baby boy.

Giving birth in a pandemic & Max's little button

To put things in perspective, by May we were well into the COVID-19 pandemic, and most of the world was at the height of quarantine to prevent the spread of the virus. There were strict protocols at the hospital: only one additional person could accompany Nicole, and no one was allowed to wait in the lobby. We were on our own, but to be honest I'm not sure we wanted it any other way. Nicole maintained an extraordinarily healthy diet throughout the pregnancy and was jogging and walking a couple of miles a day, multiple days a week, up until about a week before we had Max, so her body was ready to ensure our little Max man came into this world without complications.

On the morning of May 26th, within a couple of hours, Max was born into the world, and we were overcome with joy and happiness. He had the most distinct cry, one we still recognize to a T today, and he was beautiful in every way, including his cleft lip, his cleft palate, and his “little button,” as we called it (the little piece of skin at the philtrum right underneath the nose). With a bilateral cleft lip, the skin under the nose doesn't close completely during the first trimester and, depending on how far it develops, can leave a small piece of skin where fusion would usually have occurred. We loved his little button.

The first surgery

This was probably one of the hardest moments of Max's journey so far. To clarify, it wasn't hard at all for that little guy, but it was hard for Nicole and me. We thought we were prepared for all of the details, and we absolutely were from a logistics and technical perspective, but not from an emotional one. Nicole held him as we took him to the entrance back to surgery, we placed him down on the little baby gurney, and we watched as they rolled him back. He looked back at us and instantly started crying, as if we had abandoned him with strangers and were never coming back. It tore us apart, and we both broke down as soon as we got back to our room and shut the door. This feeling was completely biological, uncontrollable, and for the most part illogical. I mean, in a little over an hour we knew we'd be with him again, holding and comforting him as he began his healing, but let me tell you, relinquishing control of your baby to someone else was one of the most incredibly humbling experiences. We had complete trust in his care team, but I'll never forget him rolling forward, looking back at us, and the doors shutting as the nurse kept walking away from us.

When they brought him back to what would be our permanent room for the next day and a half, I knew the moment I heard his faint cry getting closer that it was him. We were both ready to see and hold him to make sure he knew it was okay. Needless to say, he looked pitiful, and it took him a number of hours before he was ready to try to eat. All things considered though, it went smoothly, and we were really happy with the results and the experience. Big props to our surgeon and Children's Healthcare of Atlanta (CHOA) for taking good care of Max during this time!

Bliss at 7 months old

My mom is one of the most caring and loving people on the planet, so much so that there were plenty of times throughout my childhood and teenage years when I would get annoyed at her wanting to do so much for us. She always wanted to be involved however she could in whatever my siblings and I had going on. She would occasionally remind us that “you'll understand one day why I am like I am when you have children.” We would roll our eyes and go about our day. I can now say I know exactly what she means. It is incredible the bond you develop with your child during the first few months of life, and now that his personality is coming through strong and he's constantly laughing, jumping, and starting to crawl, this journey has been one of the best times of my life.

We are excited to say that Max is doing great, and we've successfully avoided getting infected with COVID-19 so far (knock on wood). Max is still drinking breast milk out of a bottle and eating pureed avocado, carrots, sweet potatoes, and other healthy veggies. He's as normal a baby as you'd expect and loves to read books and play in his bouncer! He's starting to find his voice and beginning to crawl ever so slightly; it's an inspiring sight to see a baby finding new tricks and learning about the things around him. His second surgery, to repair his palate, is coming up in mid-March, so we're preparing for that. We're excited to continue our journey with Max and will update in the coming months on his progress!

Max w/ Banjo on the bed

How and why we use the Stellar blockchain in Phalanx for tamper-proofing

“Tamper-proofing” in the blockchain, huh?

Blockchains were developed with some core properties in mind: decentralization, transparency, and immutability, which make them particularly useful for certain use cases. These properties enable and empower transparent and secure financial systems, which is what most people think of when they think blockchain and cryptocurrencies (e.g. Bitcoin), but there are several creative use cases for blockchain that aren't particularly mainstream today. By understanding and utilizing SHA-2 hashing alongside that transparency and immutability, it's possible to store digital fingerprints of data and then come back at a later time and validate with certainty that the information was in fact true and existed at a particular snapshot in time.

At risk3sixty we store audit evidence for customers going through internal or external audits (SOC 2, ISO 27001, PCI, etc.), or simply using our tool as a GRC platform. For audits in particular, it's important to be able to validate at a later point in time that an artifact provided as evidence is in fact the same document that was provided days, weeks, or months ago when the audit was performed. This is where we can utilize blockchain to implement tamper-proofing. Because of its immutability (i.e. transactions cannot be changed) and transparency, the blockchain is the perfect tool for implementing tamper-proofing in your application.

Example

You can take a look here at an actual Stellar blockchain transaction whose memo contains the base64-encoded SHA-256 hash (8Mmc/UhiZ2GmVgRcT2F7CvkY/3Q+A7LUXaYFLE950BI=) of a policy that we uploaded while a consultant was performing an ISO 27001 internal audit for us.
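To illustrate how a memo value like that can be produced, here's a minimal sketch using Node's built-in crypto module (the file name is hypothetical, and this isn't necessarily our exact implementation):

const crypto = require('crypto');
const fs = require('fs');

// compute the SHA-256 digest of an artifact, base64 encoded to match
// the memo format shown above
const fileContents = fs.readFileSync('./policy.pdf'); // hypothetical file
const hash = crypto.createHash('sha256').update(fileContents).digest('base64');
console.log(hash); // e.g. 8Mmc/UhiZ2GmVgRcT2F7CvkY/3Q+A7LUXaYFLE950BI=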

What is Phalanx?

At risk3sixty we're building an audit and GRC platform to help high-growth tech startups manage their security programs, ongoing compliance efforts, and risk with ease, efficiency, and transparency across the organization. Within our platform we perform internal and external audits for our customers and provide tools and functionality to support ongoing compliance and security efforts.

Because the things done within our tool are subject to audit themselves, we wanted to provide a meaningful way to prove that things like audit logs and evidence artifacts are in fact the same as they were when they were logged or uploaded. We realized the blockchain was a perfect fit for this.

Why Stellar

The Stellar Development Foundation (SDF) describes Stellar as “... an open network for storing and moving money”. Although they focus their marketing and R&D on enabling fast, decentralized payments among individuals, Stellar has all the features needed to support our tamper-proofing use case and more.

  1. Low fees: The Lumen, or XLM, is the native cryptocurrency (or asset) used to move value through the Stellar network. At the time of writing, 1 XLM == ~$0.30 USD, and transaction fees on the network are currently 0.00001 XLM per transaction. Compared to other blockchains, the Stellar network charges an extraordinarily small amount to execute transactions, making it attractive for anyone wanting to use it for a particular use case.

  2. Fast transactions: Bitcoin transactions can take minutes to hours to complete. The Stellar network instead uses a consensus protocol (rather than proof of work or other algorithms used in other blockchains that are slow and inefficient) that supports both security and near-instant transactions. You will see your transaction complete in a matter of seconds after submitting it, which is great for the impatient.

  3. Bump sequence: The Stellar blockchain supports a number of the usual transaction types one would find when banking or managing money, but one in particular, bump sequence, is virtually (though not quite) a noop that doesn't transfer any assets to anyone. It can, however, store a transaction memo just like any other transaction, which is where our SHA-256 hash lives in the blockchain forever. Bump sequence is the transaction type we decided on to support our use case without having to continuously move assets between accounts (see the sketch below). There are a couple of other transaction types we could've used, like manage data, but bump sequence was the simplest and easiest to implement at the time.
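For a rough idea of what submitting such a transaction can look like, here's a sketch using the JavaScript stellar-sdk. The function and variable names are illustrative, and this isn't necessarily how our production code is written:

const StellarSdk = require('stellar-sdk');

// submit a bump sequence transaction whose memo is the 32-byte
// SHA-256 digest (base64) of the artifact we want to tamper-proof
async function anchorHash(secretKey, sha256Base64) {
  const server = new StellarSdk.Server('https://horizon.stellar.org');
  const keypair = StellarSdk.Keypair.fromSecret(secretKey);
  const account = await server.loadAccount(keypair.publicKey());

  const tx = new StellarSdk.TransactionBuilder(account, {
    fee: StellarSdk.BASE_FEE,
    networkPassphrase: StellarSdk.Networks.PUBLIC,
  })
    // bump sequence moves no assets; it only advances the account's sequence number
    .addOperation(
      StellarSdk.Operation.bumpSequence({
        bumpTo: (BigInt(account.sequenceNumber()) + 2n).toString(),
      })
    )
    .addMemo(StellarSdk.Memo.hash(Buffer.from(sha256Base64, 'base64')))
    .setTimeout(30)
    .build();

  tx.sign(keypair);
  return server.submitTransaction(tx);
}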

Conclusion

As you can see, if we hash an artifact or audit log's content at the time it is uploaded or populated and put that hash into the blockchain, then at a later point in time we can hash the currently stored data; if the newly generated hash matches the hash in the transaction we executed days, weeks, or months ago, we can confidently conclude that the data has not been tampered with and is the same as when it was first populated.

As an added bonus, we spent $30 to purchase the XLM to implement our use case back on March 30, 2020 at $0.0404 USD/XLM (~694.128 XLM purchased). We now have ~690 XLM left which, at ~$0.30 USD/XLM, means we're holding ~$207 USD worth of XLM. Not too shabby of an investment if you think about it ;)

Shameless plug: xlmfile.com

You can use xlmfile.com, a super small frontend-only utility I developed, with a funded Stellar account/key to populate a transaction with the SHA-256 hash of a file you upload.

Originally written October 9, 2020

As software engineers*, particularly when working for an employer, we're often writing software to be used by other people. In most cases that's our company's customers, and if we're doing it right, we're writing focused, well thought out tools and apps that solve our customers' business problems. This is purposefully vague, but suffice it to say, the reason we get paid to write modular, logically organized lines of camel case words and syntax is that someone we probably don't know personally (the customer) is paying us to do it. There are likely other people at your company, like business analysts, marketers, and even sales reps/engineers, whose responsibility it is to do market research, talk to and gather feedback from prospects and customers, and ultimately help drive the roadmap for what you end up writing. Rarely, though, do we get the opportunity to use the software ourselves, either because it's not useful to us or because we're not responsible for the activities in our organization that the software helps with. We also don't usually get to hear from the horse's mouth (the customer) how we should write something based on their current workflow, or why.

Unfun problems

Vendor & Risk Management

At risk3sixty, our R&D efforts with Phalanx are focused on solving security and compliance problems that we've struggled with ourselves. As a manager on the engineering team, I'm responsible for performing vendor management reviews for a few vendors we use in our software workflows (object storage, CI/CD, PaaS). Where do I schedule a recurring calendar task to perform these quarterly reviews? I guess I'll just throw a quarterly recurring invite on my personal calendar (but wait, what happens when I'm on vacation during the next scheduled review date...). While performing a review, I've identified a couple of risks that I want to document and track through to remediation, or simply document risk acceptance, depending on severity and the potential action items to remediate. Where do I document the risk? Let me email my boss the risk(s) and have her propagate them upward so someone with access can populate the master organizational risk register spreadsheet. Oh wait, during design of controls we actually added a SOC 2 control specifically to mitigate a risk like the one I just identified for a vendor. I want to document that this risk is associated with that control; should I tell my boss to add a note in the risk register about which control mitigates this risk, or do we have our list of SOC 2 controls in a spreadsheet somewhere we could link the risk to?

All of the italicized bits above are real questions and concerns that not only we faced, but that our customers face as well when trying to effectively manage organizational risk and design a program to manage that risk across the board. I'm now able to actually use the software we're writing to solve our customers' problems to solve my own. With more and more companies declining to do business with vendors that are not ISO 27001 certified, SOC 2 compliant, or aligned with other niche industry-standard frameworks, being able to perform the actions above effectively and efficiently is paramount to keeping things organized and ensuring future audits run smoothly and quickly.

Audit Spaghetti

When performing a SOC 2 or ISO 27001 audit, a huge chunk of the time is obviously spent scheduling walkthroughs to organize the right people together and obtain evidence that you either are or are not implementing the controls required to be compliant with the framework. Anyone who has gone through an audit like this from the customer side knows this walkthrough and evidence collection process (which, depending on the firm, might or might not happen by sending files over email [facepalm]). What you don't usually see are the number of spreadsheets, Word docs, internal emails, management reviews (inside the same spreadsheets and Word docs), and ultimately SharePoint spaghetti that takes place on the backend, between evidence collection and report generation, to successfully execute a full engagement. This is incredibly inefficient, and we've aimed at streamlining the audit process for both the customer and the auditor, providing the most meaningful information and the simplest steps to perform the audit in an easy and efficient manner. Now, whether I'm providing evidence to our auditor from the R&D side or our consultants are performing audits, we use the tools we build in Phalanx to accomplish the tasks at hand.

Eating our own dog food

For our customers, where it's important to remain compliant with the frameworks their business and/or industry requires, it behooves them to have and maintain processes ensuring they're regularly doing user access reviews and vendor reviews, maintaining and acting on an up-to-date risk register, keeping an up-to-date asset inventory, and much more. For me, I'm able to maintain a level of sanity, with the added benefit of continuing to build quality software, all while performing my compliance duties for risk3sixty.

In Phalanx we have purpose-built modules and functionality to help you organize and maintain your entire security and compliance efforts in a single pane of glass:

  1. Assessments – Perform your own internal audits, allowing you to audit business units, departments, etc. for compliance with organizational controls and/or identify risk.
  2. Compliance Calendar – Create recurring tasks and assign them to stakeholders and those responsible for performing compliance actions.
  3. Vendor Management – Document all of the third party vendors you do business with, perform regularly scheduled vendor reviews (integrated with the compliance calendar), send questionnaires to vendors to answer security questions, quantify the risk each vendor poses to the business, etc.
  4. Risk Register – Document and organize organizational risks you'd like to track through to remediation. Link risks to vendors (or specific reviews), calendar events, audit controls, etc.
  5. Inventories – Build a generic inventory of “things” (think building a custom BIA, asset inventory, organizational vulnerability list, etc.). We maintain a list of assets and a BIA, among other things, and link them to assessment controls, risks in registers, owners, etc. in these inventories.

Use it, use it, use it

In closing, I'm here to say that as an engineer, the best tool you have at your fingertips for building useful software is to use it yourself. Not everyone is empowered or able to do this, but I think it helps tremendously in keeping your customers happy in the long term.

* I've seen some heated debates as to whether software developers deserve the title “engineer”. The most common argument against giving them the title appears to be that most engineering disciplines require some industry-standard credential or certification that grants the credibility, and in some cases the legal ability, to practice that particular classification of engineering. The same doesn't exist in software, but I'd argue most software developers not only have the aptitude but possess the skills necessary to be outstanding engineers, so I like to continue to think of myself as one.

Originally written September 23, 2020

Having worked in technical fields/software for a number of years now, I primarily focus on developing and keeping my hard skills sharp in order to stay ahead of the competition. Although I'm confident I have a decent handle on the typical soft skills that are advantageous in an office setting, I have nevertheless invested very little time in maintaining them. Since my wife and I had our first child in May during the COVID-19 quarantine, I've realized there are a large number of skills that overlap between business and raising a child.

Trying to put an untired baby down for nap time to maintain a consistent daily routine requires countless skills that are also beneficial in business and tech. Here are just a few:

  1. Patience – This goes without saying, but if you aren't patient with your little one when he's tossing and turning in his swaddle, while your lower back and shoulder are about to fall off from leaning over the crib and holding his pacifier (paci), you're in for a tough time. You've very likely had to be this patient with your prospects and/or customers at some point, sometimes even as frequently as with your infant :). Remaining diligent in your patience almost always pays off, so when you have a customer giving you a hard time, try your hardest to understand their challenges and what you can do, if anything, to fix them.

  2. Finesse – When your child is pushing and pulling his head left and right because he's angry about you forcing him to nap, you can approach this one of two ways: allow him free movement and elegantly move the hand that's keeping the paci locked in his mouth, or resist his motion and apply pressure to keep his head relatively still as he moves through his tantrum. In my somewhat large (and growing) sample size, the first option works far better, far more often. Whatever your role at your company, you always want to go about your business and execute your mission with the highest quality possible. That requires consistency, process, and likely some controlled creativity, all of which I would argue add up to having finesse in your work.

  3. Stealth – Some of you know where I'm going with this, but once the little one has finally shut his eyes and stopped moving around, you've gotta find a way out of the room, quietly. I've developed some pretty Batman-ish stealth to be able to slide out of a chair, tiptoe to the door, and open and shut it without even a peep. In business, sometimes you're planning a pivot or building a new product and want to keep it under wraps no matter how excited you and your team are for its debut. There are reasons you might not want to be stealthy in these situations, but there are times you do, and knowing how to go about your internal business quietly is important.

  4. High EQ – Emotional quotient, or emotional intelligence, is the idea that someone can effectively understand what and how they're feeling and what and how others are feeling, usually through nonverbal communication (though it doesn't have to be nonverbal). Babies obviously wear their emotions on their sleeves, but effectively preparing to put your baby down for a nap, especially when he isn't tired, requires you to know your baby on a deep emotional level even though he can't talk yet. Delivering a timely song, mantra, or other learned technique to calm him down and tire him out can make all the difference in the overall time it takes to get him down. Having a high EQ in the workplace provides many benefits as both an individual contributor (IC) and a leader. As an IC you can gauge how your peers are doing and help out when needed (or they can help you when you need it), in addition to being able to communicate effectively with your boss(es) about how things are going. As a leader you can effectively communicate with and motivate your team to believe in and carry out the mission at hand with consistency, efficiency, and quality.

There are so many more skills (both hard and soft) that having our baby has brought to the forefront, skills I either hadn't needed to develop yet or hadn't built upon in a long time, but I can say it's both challenging and amazingly fun all at the same time.

What are some skills having kids taught you that you were able to bring along back to the workplace to help your career or goals?

Hi, I'm Lance, a software developer who loves to learn and explore new things. Having previously worked as a technical account manager, technical coordinator, and implementation consultant, I try to use the knowledge and skills I've gained through customer-facing interactions to build powerful yet easy-to-use software and systems.

When I'm not focused on studying, writing new software, and/or reviewing existing software, I spend the remainder of my time with my wife and two cats in Vinings, GA, USA, fiddling with technology and new ideas.

That's all for now, but I'm planning to write about things from fun and cool technical topics I run into, all the way to things about my family and experiences. See you soon!

function sincerely() {
  return "Lance";
}