What is Code Signing?

Before understanding what “Code Signing” is, let’s try to understand two terms prevalent in computer security – Authenticity and Integrity.

Authenticity – is talked about in all walks of life. We usually ask whether the product we are buying is authentic, or comes from an authentic source. Does the labeling confirm what it claims to be? Authenticity, then, is a form of identity validation. For code, authenticity is nothing but validating the author’s identity.

Integrity – in humans is taken as the quality of being honest. An honest person is trustworthy. Similarly, data/software integrity refers to trustworthiness: it means the code has not been tampered with or altered over time.

Now that we understand what authenticity and integrity are – “Code Signing” is a method that helps validate both!

Signed code is authentic: the signature validates the author’s identity and ensures that the code has not been altered with malicious code that could damage the application.

In SQL Server, you can code sign stored procedures, functions, triggers and assemblies. This can be achieved using digital certificates or asymmetric keys.
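As a minimal sketch, here is how a stored procedure can be signed with a database certificate (the certificate name, password and procedure are hypothetical examples, not from any real system):

    -- Hypothetical procedure to be signed
    CREATE PROCEDURE dbo.usp_GetOrders AS SELECT 1 AS OrderId;
    GO

    -- Create a certificate in the database, protected by a password
    CREATE CERTIFICATE SigningCert
        ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd'
        WITH SUBJECT = 'Certificate for signing dbo.usp_GetOrders';
    GO

    -- Sign the stored procedure with the certificate
    ADD SIGNATURE TO OBJECT::dbo.usp_GetOrders
        BY CERTIFICATE SigningCert
        WITH PASSWORD = 'Str0ng!Passw0rd';
    GO

    -- Verify the signature: the signed module appears in sys.crypt_properties
    SELECT o.name, cp.crypt_type_desc
    FROM sys.crypt_properties AS cp
    JOIN sys.objects AS o ON cp.major_id = o.object_id;

If the procedure definition is altered after signing, the signature is dropped, which is exactly how integrity is enforced.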

Static Data Masking

Data masking, as we all know, is a data-protection layer that replaces/scrambles/masks sensitive data before it is disclosed to unwanted/unauthorized users. “Static Data Masking”, also known as persistent data masking, is a method to protect data at rest. It is a new security feature released in SQL Server 2019 (available for public preview at the time this blog was written) that helps users create a masked copy of [sensitive] data from the production environment. Using this feature, a copy of the live data is created with the appropriate masking functions applied, and the masked copy can be shared with users who need to work on non-live data. This feature also helps keep organizations compliant when they are subject to data protection/privacy regulations such as GDPR.

The data masking process starts with users configuring masking operations for the columns in the database that contain sensitive information. During the data copy process, data is copied from the live system to a new database, and then masking functions (according to the masking configuration) are applied to mask the data at the column level. Unlike dynamic data masking, static data masking is persistent and irreversible (a one-way process); the original data cannot be retrieved.
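In the preview, the configuration is driven through SQL Server Management Studio rather than T-SQL, so the following is only an illustrative sketch of the effect a masking function has on the copied database (all table and column names are hypothetical):

    -- Illustrative only: hand-rolled masking applied to the copied database,
    -- mimicking what a redaction-style masking function produces.
    USE SalesCopy;   -- the masked copy, never the live database
    GO

    -- Redact email addresses and keep only the last 4 digits of card numbers
    UPDATE dbo.Customers
    SET Email      = 'masked@example.com',
        CreditCard = 'XXXX-XXXX-XXXX-' + RIGHT(CreditCard, 4);
    GO

Because the update runs against the copy, the live data remains untouched while the shared copy no longer exposes the sensitive values.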

Static data masking can be used for development, testing, analytics and business reporting, compliance, troubleshooting, and any other scenario where specific data cannot be copied to different environments.

 

References:

SQL Server 2019 CTP 2.1

Replacing a Legacy System: Part-2

In my last post we focused on “WHY” we should replace a legacy system. In this post, we shall try to understand a few important challenges that should be addressed.

  • Data Loss

One of the biggest challenges for any data/application migration project is that the target system (new technology) should readily consume the source data (legacy system). There is a plethora of tools created especially to support migration, but it still takes thorough analysis to map, transform and migrate legacy data. Often, Data Architects (DAs) are well versed with either the source or the target system (and rarely both) – a successful migration depends on their dexterity in mitigating the gaps and building a bridge between the two systems, making data movement smooth.

  • Data Transformation/Cleansing

Data transformation is the heart of any migration project. It involves complex analysis of existing data, metadata, business constraints, data integrity constraints, mandatory attributes, default attributes, derived attributes, etc. DAs have to wear multiple hats – business analyst, system analyst and developer – often talking to business stakeholders, the infrastructure team, and the project team to design migration specifications and transformation rules, thereby making data ingestion an error-free process. A simplified example of such a transformation is sketched below.
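As a hypothetical sketch only (the legacy and target table names, defaults and derived columns are made up for illustration), one transformation step might look like this:

    -- Hypothetical mapping from a legacy customer table to the target schema.
    -- All object names, defaults and derived columns are illustrative.
    INSERT INTO target.dbo.Customer (CustomerId, FullName, Country, CreatedOn)
    SELECT
        src.cust_id,
        LTRIM(RTRIM(src.first_nm)) + ' ' + LTRIM(RTRIM(src.last_nm)), -- derived attribute
        COALESCE(NULLIF(src.country_cd, ''), 'UNKNOWN'),              -- default for missing values
        COALESCE(src.created_dt, GETDATE())                           -- mandatory attribute backfilled
    FROM legacy.dbo.customer AS src;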

  • Data Quality

As it happens, some legacy systems are poorly designed, which leads to a lot of data-quality issues. These issues should either be fixed based on recommendations from the business, or filtered out before migrating the data. However, filtering out one such erroneous record can mean leaving other good-quality records (linked to it) behind in the legacy system because of integrity constraints. That is where an intelligent DA will extrapolate and try to fix such erroneous records using pseudo values. Again, in the end, it is a business decision, and by all means [any modifications] should be agreed upon by the stakeholders.
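A hypothetical example of patching an erroneous record with an agreed pseudo value, so that its linked rows can still migrate (names, values and the rule itself are purely illustrative):

    -- Hypothetical: orders reference customers, but some customers carry an
    -- invalid birth date that would fail a target-side CHECK constraint.
    -- Replace it with a stakeholder-agreed pseudo value instead of dropping
    -- the customer (and losing all linked orders with it).
    UPDATE legacy.dbo.customer
    SET birth_dt = '1900-01-01'      -- agreed pseudo value
    WHERE birth_dt IS NULL
       OR birth_dt > GETDATE();      -- clearly erroneous future dates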

  • Reconciliation

An error-free migration does not guarantee that the migration is successful – validation is the key! The best way to ascertain this is to reconcile a use-case between the legacy system and the new system. There are many tools that can reconcile data, but manual validation is still required, by executing the same use-case on both systems and recording the outcome. Reconciliation packages should also determine the overall quality of the migrated data, and special attention should be given to validating data loss and data quality.
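A minimal reconciliation sketch, assuming both systems are reachable from one SQL Server instance (the database and table names are hypothetical): compare row counts and a simple aggregate between source and target.

    -- Hypothetical reconciliation query: row counts and an order-total checksum
    -- compared between the legacy system and the new system.
    SELECT 'legacy' AS system_name,
           COUNT(*)          AS order_count,
           SUM(order_amount) AS total_amount
    FROM legacy.dbo.orders
    UNION ALL
    SELECT 'target',
           COUNT(*),
           SUM(OrderAmount)
    FROM target.dbo.Orders;

Matching counts and totals do not prove the migration is perfect, but a mismatch immediately flags data loss or a transformation defect.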

Replacing a Legacy System: Part-1

Legacy applications are like “first love” – really hard to let go! While it is an uphill task to replace them, there are many reasons why replacing a legacy application is beneficial. Here are a few reasons that are critical, deciding factors for replacing legacy applications.

  • Cost or Cost-Effectiveness

This is a prime factor for replacing legacy systems. A major chunk of money goes into maintaining these applications, and without debate any organization would have to do so; but is it worth maintaining an app built on nearly obsolete technology? If I were a CFO, I would always ask, “What ROI does supporting my current app bring versus replacing it with a new one?” When the replacement cost of the legacy application is lower than the cost of maintaining it, replacement is the desirable course of action.

  • Integration Challenges

Legacy systems do not always integrate well with the latest technology, and the pain of integrating them via custom-written apps can cost more than replacing the entire system. In the era of IoT (and ubiquitous computing), where all devices communicate with each other and integration is seamless, it makes more sense to adopt the new technology and replace the old one.

  • Productivity

Productivity is another enabler in the replacement of legacy systems. As the basic definition of business shifts from product-centric to customer-centric, use-cases have become more complex. Legacy systems have been doing the heavy lifting (amending and sustaining) at the price of productivity (cost and time). Newer technology and apps offer configurable business-rule engines that enhance productivity and make future adaptation much smoother.

  • Laid-back Decision Making

Decision making has been a challenge with legacy applications because accessing data and churning out meaningful insights (data analytics) requires a separate decision-making system. Data from the legacy system has to be transformed and refreshed into the decision-making system, coupled with a reporting solution for presentation. As a result, business stakeholders have always taken “reactive” decisions based on historic trends (or incidents, in terms of security). With the advent of bleeding-edge technology and in-memory data processing, stakeholders are in a position to take proactive decisions (and also mitigate risks in a timely manner, in terms of security).

Now that you understand WHY legacy applications should be replaced, in my next blog I shall try to list the important technical challenges that must be overcome for a smooth replacement.