




Once a developer finishes writing code and it works, it can feel like the job is done. It is not. The job is done only when the code is refactored and cleaned up.
Developers inevitably work to a deadline and try different approaches, which can result in artifacts left over in the code. Even when no artifacts remain, developers should still want to go back into their code and see how to improve it. Refactoring helps keep costs down in the long run, too, as it makes the code more maintainable and readable and reduces technical debt.
Here are several approaches to code refactoring, with examples that show how they work and pitfalls to avoid when applying a refactoring pattern.
The first and most important refactor is always to remove complexity -- the old mantra "keep it simple, stupid," or KISS. This is a broad category, from finding a simpler data structure to use to simplifying an algorithm. Simpler code is both easier to support and extend.
This particular refactoring decision typically comes down to simplifying the logic used in conditionals in the code. When developers write code to get it done, they generally do not use the most optimized version of the logic for a set of conditionals. Take the time to go over these conditionals and see if the logic they implement could be done in a different, simpler way.
The goal here is to get as simple as possible and still fulfill the objective. As an example, consider this set of if statements:
```typescript
let leapyear = false;
if (year % 4 == 0) {
    if (year % 100 == 0) {
        if (year % 400 == 0) {
            leapyear = true;
        }
    } else {
        leapyear = true;
    }
}
```

They can be reduced to this:
```typescript
let leapyear = ((year % 4 == 0) && (year % 100 != 0)) || (year % 400 == 0);
```

When working to a deadline on code, it is easy to repeat similar sections of code throughout the codebase. This refactoring principle of "don't repeat yourself" (DRY) aims to identify similar sections of code and distill them to a single function. This reduces the overall size of the codebase and makes it easier to follow.
There is a limit to this refactor. It takes experience to know how to balance code brevity vs. acceptable repetition.
For example, in this original code, the logic that finds the reflection line is almost identical in both functions:
```typescript
// Find original horizontal reflection line
public findOriginalHorizontal(): number {
    let retval = 0;
    let reflectline = 0;
    let mapSeen: string[] = [];
    mapSeen.push(this.map[0]);
    for (let i = 1; i < this.map.length; i++) {
        mapSeen.push(this.map[i]);
        if (reflectline == 0 && (this.map[i] == mapSeen[i-1])) {
            reflectline = i;
            continue;
        }
        if (reflectline > 0) {
            let diff = i - reflectline;
            if (reflectline - diff - 1 >= 0) {
                if (mapSeen[reflectline - diff - 1] != this.map[i]) {
                    reflectline = 0;
                }
            }
        }
    }
    return reflectline;
}

// Find original vertical reflection line
public findOriginalVertical(): number {
    let retval = 0;
    let reflectline = 0;
    let mapSeen: string[] = [];
    const rotatedMap = rotate(this.map);
    mapSeen.push(rotatedMap[0]);
    for (let i = 1; i < rotatedMap.length; i++) {
        mapSeen.push(rotatedMap[i]);
        if (reflectline == 0 && (rotatedMap[i] == mapSeen[i-1])) {
            reflectline = i;
            continue;
        }
        if (reflectline > 0) {
            let diff = i - reflectline;
            if (reflectline - diff - 1 >= 0) {
                if (mapSeen[reflectline - diff - 1] != rotatedMap[i]) {
                    reflectline = 0;
                }
            }
        }
    }
    return reflectline;
}
```

With a DRY code refactoring effort, that can be replaced by the following:
```typescript
// Find original reflection line
public findOriginalReflection(isVertical: boolean): number {
    let retval = 0;
    let reflectline = 0;
    let mapSeen: string[] = [];
    const targetMap = isVertical ? rotate(this.map) : this.map;
    mapSeen.push(targetMap[0]);
    for (let i = 1; i < targetMap.length; i++) {
        mapSeen.push(targetMap[i]);
        if (reflectline == 0 && (targetMap[i] == mapSeen[i-1])) {
            reflectline = i;
            continue;
        }
        if (reflectline > 0) {
            let diff = i - reflectline;
            if (reflectline - diff - 1 >= 0) {
                if (mapSeen[reflectline - diff - 1] != targetMap[i]) {
                    reflectline = 0;
                }
            }
        }
    }
    return reflectline;
}
```

Developers under pressure to produce code might find it easy to use direct values or add a new field to a class where data behaves differently. This refactoring pattern aims to rationalize how the data is organized, keep like with like and remove hardcoded references wherever possible.
A common example of this is shown here, where a repeated string, such as a URL, is reused throughout the code:
```csharp
public static async Task<FileResponse> fileInfo(string token, bool formatted)
{
    var client = new HttpClient();
    var url = $"https://waifuvault.moe/rest/{token}?formatted={formatted}";
    var infoResponse = await client.GetAsync(url);
    checkError(infoResponse, false);
    var infoResponseData = await infoResponse.Content.ReadAsStringAsync();
    return JsonSerializer.Deserialize<FileResponse>(infoResponseData) ?? new FileResponse();
}

public static async Task<bool> deleteFile(string token)
{
    var client = new HttpClient();
    var url = $"https://waifuvault.moe/rest/{token}";
    var urlResponse = await client.DeleteAsync(url);
    checkError(urlResponse, false);
    var urlResponseData = await urlResponse.Content.ReadAsStringAsync();
    return urlResponseData == "true";
}
```

Add a string constant, and see the difference:
```csharp
// Note: const fields are implicitly static in C#, so "static const" is invalid
public const string baseURL = "https://waifuvault.moe/rest";

public static async Task<FileResponse> fileInfo(string token, bool formatted)
{
    var client = new HttpClient();
    var url = $"{baseURL}/{token}?formatted={formatted}";
    var infoResponse = await client.GetAsync(url);
    checkError(infoResponse, false);
    var infoResponseData = await infoResponse.Content.ReadAsStringAsync();
    return JsonSerializer.Deserialize<FileResponse>(infoResponseData) ?? new FileResponse();
}

public static async Task<bool> deleteFile(string token)
{
    var client = new HttpClient();
    var url = $"{baseURL}/{token}";
    var urlResponse = await client.DeleteAsync(url);
    checkError(urlResponse, false);
    var urlResponseData = await urlResponse.Content.ReadAsStringAsync();
    return urlResponseData == "true";
}
```

This code refactoring pattern is similar to the data organization described previously but focuses on the methods and features. Developers pushing to a deadline may create lots of small classes that do almost nothing or a huge monolith class that does everything. Both of those are equally bad.
The goal here is to rationalize the features and objects you use into the optimal number of objects and methods, without going too far in either direction.
The following is a good example of this refactoring pattern, a class used for multiple purposes that ends up becoming unclear:
```csharp
public class Files
{
    public string? filename { get; set; }
    public string? url { get; set; }
    public byte[]? buffer { get; set; }
    public string? expires { get; set; }
    public string? password { get; set; }
    public bool? hidefilename { get; set; }
    public string? token { get; set; }
    public bool? fileprotected { get; set; }
    public string? retentionPeriod { get; set; }
}
```

We can break that up into two classes whose functions are clear:
```csharp
public class FileUpload
{
    public string? filename { get; set; }
    public string? url { get; set; }
    public byte[]? buffer { get; set; }
    public string? expires { get; set; }
    public string? password { get; set; }
    public bool? hidefilename { get; set; }
}

public class FileResponse
{
    public string? token { get; set; }
    public string? url { get; set; }
    public bool? fileprotected { get; set; }
    public string? retentionPeriod { get; set; }
}
```

In some ways, this refactoring method is the opposite of simplification. Here, the goal is to abstract parts of the code from the implementation details, enabling simpler future expansion.
A good example of this is to convert built-in functions into a modularized approach and enable the creation of new modules to add future functions.
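This modularized approach can be sketched as follows. This is a minimal illustration, not code from the article: the `Exporter` interface, the `registerExporter` helper and the format names are all hypothetical.

```typescript
// Hypothetical sketch: a registry of exporter modules replaces hardcoded,
// built-in format handling, so new formats can be added without touching
// the calling code.
interface Exporter {
    export(data: string[]): string;
}

const exporters = new Map<string, Exporter>();

function registerExporter(name: string, exporter: Exporter): void {
    exporters.set(name, exporter);
}

// The former built-in behavior becomes just another module...
registerExporter("csv", { export: (data) => data.join(",") });

// ...and future functions are new modules registered the same way.
registerExporter("lines", { export: (data) => data.join("\n") });

function exportData(format: string, data: string[]): string {
    const exporter = exporters.get(format);
    if (!exporter) throw new Error(`Unknown format: ${format}`);
    return exporter.export(data);
}
```

A caller writes `exportData("csv", rows)` and never needs to change when a new format module is registered.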
The most common example of abstraction is using the service-repository pattern to handle data access. With this pattern, a developer can change the data back end without rewriting the application code.
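The service-repository pattern can be sketched like this. The names here (`UserRepository`, `InMemoryUserRepository`, `UserService`) are illustrative, not from any real codebase; the point is that the service depends only on an interface, so the storage back end can be swapped out later.

```typescript
// Hypothetical sketch of the service-repository pattern.
interface User {
    id: number;
    name: string;
}

// The service codes against this interface, not a concrete back end.
interface UserRepository {
    findById(id: number): User | undefined;
    save(user: User): void;
}

// One concrete back end; a SQL or REST implementation could replace it
// without any change to UserService.
class InMemoryUserRepository implements UserRepository {
    private users = new Map<number, User>();
    findById(id: number): User | undefined {
        return this.users.get(id);
    }
    save(user: User): void {
        this.users.set(user.id, user);
    }
}

// Business logic lives here, fully ignorant of how data is stored.
class UserService {
    constructor(private repo: UserRepository) {}

    rename(id: number, name: string): User {
        const user = this.repo.findById(id);
        if (!user) throw new Error(`No user with id ${id}`);
        const renamed = { ...user, name };
        this.repo.save(renamed);
        return renamed;
    }
}
```

Swapping the data back end then means writing one new class that implements `UserRepository` and passing it to the service's constructor.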
As with all things in life, refactoring can be taken too far. Each of these code refactoring techniques, applied to excess, can end up making the code brittle and hard to extend in the future.
Understanding where the line is between well-refactored code and brittle code can be an art form. The best advice is to refactor what feels right and stop before the changes start making the code harder to extend in the future.
Walker Aldridge is a programmer with 40 years of experience in multiple languages and remote programming. He is also an experienced systems admin and infosec blue team member with interest in retrocomputing.