Yealink Releases New T5 Business Phone Series

Feb 24, 2020 by Sam Taylor

The Yealink T5 Business Phone Series – Redefining Next-Gen Personal Collaboration Experience

Yealink, a leading global provider of enterprise communication and collaboration solutions, recently announced the release of the new T5 Business Phone Series and the VP59 Flagship Smart Video Phone. Responding to changes and demands in the marketplace, Yealink has designed the T5 Series as what it bills the most advanced IP desktop phone portfolio in the industry. The multifunctional T5 Business Phone Series aims to deliver a personalized collaboration experience with the flexibility to accommodate a range of business needs.

The T5 Business Phone Series introduces seven phone models to cover different demands. With an ergonomic design and larger LCD displays, the series is built to optimize the visual experience: the fully adjustable HD screen can be tilted to suit varied lighting, heights and sitting positions, so users can always maintain the best viewing angle.

Each model in the T5 Business Phone Series embeds a virtual voice “shield” powered by Yealink’s exclusive Acoustic Shield technology, which uses multiple microphones to create a virtual barrier between the speaker and outside sound sources. Once enabled, it intelligently blocks or mutes sounds from outside the “shield”, so the person on the other end hears only you, and follows you clearly. This dramatically reduces frustration and improves productivity.

Featuring built-in Bluetooth and Wi-Fi, the Yealink T5 Business Phone Series offers industry-leading connectivity and scalability. The T5 Series supports wireless communication through wireless headsets and mobile phones in sync, and is ready for seamless switching of calls between the desktop phone and a cordless DECT headset via a corded-cordless phone configuration.

The Yealink T5 Business Phone Series redefines the next-gen personal collaboration experience and, with it, the value of a desktop phone: more possibilities to discover, explore and redefine.

About Yealink

Founded in 2001, Yealink (Stock Code: 300628) is a leading global provider of enterprise communication and collaboration solutions, offering video conferencing services to enterprises worldwide. With a strong focus on research and development, and technical patents in cloud computing, audio, video and image processing, Yealink has built a panoramic audio and video conferencing solution by merging its cloud services with a series of endpoint products. Selling in more than 140 countries and regions including the US, the UK and Australia, Yealink ranks No. 1 in global market share of SIP phone shipments (Global IP Desktop Phone Growth Excellence Leadership Award Report, Frost & Sullivan, 2018).

For more information, please visit: www.yealink.com.

CVE-2019-19781 – Vulnerability in Citrix Application Delivery Controller

Feb 11, 2020 by Sam Taylor

Description of Problem

A vulnerability has been identified in Citrix Application Delivery Controller (ADC), formerly known as NetScaler ADC, and Citrix Gateway, formerly known as NetScaler Gateway, that, if exploited, could allow an unauthenticated attacker to perform arbitrary code execution.

The scope of this vulnerability includes Citrix ADC and Citrix Gateway Virtual Appliances (VPX) hosted on any of Citrix Hypervisor (formerly XenServer), ESX, Hyper-V, KVM, Azure, AWS, GCP or on a Citrix ADC Service Delivery Appliance (SDX).

Further investigation by Citrix has shown that this issue also affects certain deployments of Citrix SD-WAN, specifically the Citrix SD-WAN WANOP edition, which packages Citrix ADC as a load balancer and is therefore affected.

The vulnerability has been assigned the following CVE number:

• CVE-2019-19781: Vulnerability in Citrix Application Delivery Controller, Citrix Gateway and Citrix SD-WAN WANOP appliance leading to arbitrary code execution

The vulnerability affects the following supported product versions on all supported platforms:

• Citrix ADC and Citrix Gateway version 13.0 all supported builds before 13.0.47.24

• NetScaler ADC and NetScaler Gateway version 12.1 all supported builds before 12.1.55.18

• NetScaler ADC and NetScaler Gateway version 12.0 all supported builds before 12.0.63.13

• NetScaler ADC and NetScaler Gateway version 11.1 all supported builds before 11.1.63.15

• NetScaler ADC and NetScaler Gateway version 10.5 all supported builds before 10.5.70.12

• Citrix SD-WAN WANOP appliance models 4000-WO, 4100-WO, 5000-WO, and 5100-WO all supported software release builds before 10.2.6b and 11.0.3b

What Customers Should Do

Exploits of this issue on unmitigated appliances have been observed in the wild. Citrix strongly urges affected customers to immediately upgrade to a fixed build OR apply the provided mitigation, which applies equally to Citrix ADC, Citrix Gateway and Citrix SD-WAN WANOP deployments. Customers who choose to apply the mitigation first should then upgrade all of their vulnerable appliances to a fixed build as soon as possible. Subscribe to bulletin alerts at https://support.citrix.com/user/alerts to be notified when new fixes are available.

The following knowledge base article contains the steps to deploy a responder policy to mitigate the issue in the interim until the system has been updated to a fixed build: CTX267679 – Mitigation steps for CVE-2019-19781
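For orientation, the interim mitigation takes the form of a global responder policy that rejects requests targeting the vulnerable /vpns/ path. From memory, the NetScaler CLI sequence published in CTX267679 resembles the sketch below; treat CTX267679 itself as the authoritative source, since the exact steps vary by version and interface:

```text
enable ns feature responder
add responder action respondwith403 respondwith "\"HTTP/1.1 403 Forbidden\r\n\r\n\""
add responder policy ctx267679 "HTTP.REQ.URL.DECODE_USING_TEXT_MODE.CONTAINS(\"/vpns/\") && (!CLIENT.SSLVPN.IS_SSLVPN || HTTP.REQ.URL.DECODE_USING_TEXT_MODE.CONTAINS(\"/../\"))" respondwith403
bind responder global ctx267679 1 END -type REQ_OVERRIDE
save config
```

The policy simply answers matching requests with an HTTP 403 before they reach the vulnerable handler, which is why it can be applied without downtime.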

Upon application of the mitigation steps, customers may then verify correctness using the tool published here: CTX269180 – CVE-2019-19781 – Verification Tool

In Citrix ADC and Citrix Gateway release 12.1 build 50.28, an issue exists that prevents responder and rewrite policies from processing the packets that match their policy rules. This issue was resolved in the refreshed 12.1 build 50.28/31, after which the mitigation steps, if applied, will be effective. However, Citrix recommends that customers on these builds now update to 12.1 build 55.18 or later, where the CVE-2019-19781 issue is already addressed.

Customers on “12.1 build 50.28” who wish to defer updating to “12.1 build 55.18” or later should choose one from the following two options for the mitigation steps to function as intended:

1. Update to the refreshed “12.1 build 50.28/50.31” or later and apply the mitigation steps, OR

2. Apply the mitigation steps to protect the management interface as published in CTX267679. This will mitigate attacks not just on the management interface but on ALL interfaces, including Gateway and AAA virtual IPs

Fixed builds have been released across all supported versions of Citrix ADC and Citrix Gateway, as well as for the applicable Citrix SD-WAN WANOP appliance models. Citrix strongly recommends that customers install these updates as soon as possible. The fixed builds can be downloaded from https://www.citrix.com/downloads/citrix-adc/, https://www.citrix.com/downloads/citrix-gateway/ and https://www.citrix.com/downloads/citrix-sd-wan/.


Customers who have upgraded to fixed builds do not need to retain the mitigation described in CTX267679.

Fix Timelines

Citrix has released fixes in the form of refresh builds across all supported versions of Citrix ADC, Citrix Gateway, and applicable appliance models of Citrix SD-WAN WANOP. Please refer to the table below for the release dates.

Acknowledgements

Citrix thanks Mikhail Klyuchnikov of Positive Technologies, and Gianlorenzo Cipparrone and Miguel Gonzalez of Paddy Power Betfair plc for working with us to protect Citrix customers.

What Citrix Is Doing

Citrix is notifying customers and channel partners about this potential security issue. This article is also available from the Citrix Knowledge Center at  http://support.citrix.com/.

Obtaining Support on This Issue

If you require technical assistance with this issue, please contact Citrix Technical Support. Contact details for Citrix Technical Support are available at  https://www.citrix.com/support/open-a-support-case.html

Reporting Security Vulnerabilities

Citrix welcomes input regarding the security of its products and considers any and all potential vulnerabilities seriously. For guidance on how to report security-related issues to Citrix, please see the following document: CTX081743 – Reporting Security Issues to Citrix

Splunk 2020 Predictions

Jan 7, 2020 by Sam Taylor

Around the turn of each new year, we start to see predictions issued from media experts, analysts and key players in various industries. I love this stuff, particularly predictions around technology, which is driving so much change in our work and personal lives. I know there’s sometimes a temptation to see these predictions as Christmas catalogs of the new toys that will be coming, but I think a better way to view them, especially as a leader in a tech company, is as guides for professional development. Not a catalog, but a curriculum.

We’re undergoing constant transformation — at Splunk, we’re generally tackling several transformations at a time — but too often, organizations view transformation as something external: upgrading infrastructure or shifting to the cloud, installing a new ERP or CRM tool. Sprinkling in some magic AI dust. Or, like a new set of clothes: We’re all dressed up, but still the same people underneath. 

I think that misses a key point of transformation: regardless of what tools or technology are involved, a “transformation” doesn’t just change your toolset. It changes the how, and sometimes the why, of your business. It transforms how you operate. It transforms you.

Splunk’s Look at the Year(s) Ahead

That’s what came to mind as I was reading Splunk’s new 2020 Predictions report. This year’s edition balances exciting opportunities with uncomfortable warnings, both of which are necessary for any look into the future.

Filed under “Can’t wait for that”: 

  • 5G is probably the most exciting change, and one that will affect many organizations soonest. As the 5G rollouts begin (expect it to be slow and patchy at first), we’ll start to see new devices, new efficiencies and entirely new business models emerge. 
  • Augmented and virtual reality have largely been the domain of the gaming world. However, meaningful and transformative business applications are beginning to take off in medical and industrial settings, as well as in retail. The possibilities for better, more accessible medical care, safer and more reliable industrial operations and currently unimagined retail experiences are spine-tingling. As exciting as the gaming implications are, I think that we’ll see much more impact from the use of AR/VR in business.
  • Natural language processing is making it easier to apply artificial intelligence to everything from financial risk to the talent recruitment process. As with most technologies, the trick here is in carefully considered application of these advances. 

On the “Must watch out for that” side:

  • Deepfakes are a disturbing development that threaten new levels of fake news, and also challenge CISOs in the fight against social engineering attacks. It’s one thing to be alert to suspicious emails. But when you’re confident that you recognize the voice on the phone or the image in a video, it adds a whole new layer of complexity and misdirection.
  • Infrastructure attacks: Coming into an election year, there’s an awareness of the dangers of hacking and manipulation, but the vulnerability of critical infrastructure is another issue, one that ransomware attacks only begin to illustrate.

Tools exist to mitigate these threats, from the data-driven technologies that spot digital manipulations or trace the bot armies behind coordinated disinformation attacks to threat intelligence tools like the MITRE ATT&CK framework, which is being adopted by SOCs and security vendors alike. It’s a great example of the power of data and sharing information to improve security for all.

Change With the Times

As a leader trying to drive Splunk forward, I have to look at what’s coming and think, “How will this transform my team? How will we have to change to be successful?” I encourage everyone to think about how the coming technologies will change our lives — and to optimize for likely futures. Business leaders will need greater data literacy and an ability to talk to, and lead, technical team members. IT leaders will continue to need business and communication skills as they procure and manage more technology than they build themselves. We need to learn to manage complex tech tools, rather than be mystified by them, because the human interface will remain crucial. 

There are still some leaders who prefer to “trust their gut” rather than be “data-driven.” I always think that this is a false dichotomy. To ignore the evidence of data is foolish, but data generally only informs decisions — it doesn’t usually make them. An algorithm can mine inhuman amounts of data and find patterns. Software can extract that insight and render an elegant, comprehensible visual. The ability to ask the right questions upfront, and decide how to act once the insights surface, will remain human talents. It’s the combination of instinct and data together that will continue to drive the best decisions.

This year’s Splunk Predictions offer several great ways to assess how the future is changing and to inspire thought on how we can change our organizations and ourselves to thrive.

Tips and Tricks with MS SQL (Part 8)

Dec 23, 2019 by Sam Taylor

Tame Your Log Files!

By default, the recovery model for databases on Microsoft SQL Server is set to “full”. This can cause issues for the uninitiated: if backups aren’t fully understood and managed correctly, log files can bloat in size and get out of control. With the “full” recovery model, you get the flexibility of point-in-time restores and support for high-availability scenarios, but this also means having to run separate backups for log files in addition to the data files.

To keep things simple, we’ll look at the “simple” recovery model first. When you run backups, you’re only dealing with data backups, whether full or differential. The log file, which holds transactions between backups, isn’t something you need to concern yourself with unless you’re doing advanced disaster recovery, like database mirroring, log shipping, or high-availability setups.

When dealing with the “full” recovery model, you’re in charge of backing up not only the data files but the log files as well. In a healthy server configuration, log files are much smaller than data files, which means you can run log backups every 15 minutes or every hour with far less I/O activity than a full or differential backup. This is where you get the point-in-time flexibility. It is also where I often see a lot of issues…

Log files run astray. A new database might be created or migrated while the default recovery model is still “full”. A server that relies on a simpler setup might not catch this or have log backups in place. The log file then starts growing exponentially, towering over the data file size and creating hordes of VLFs (look out for a future post about these). I’ve seen a lot of administrators who don’t know how to control this resort to shrinking databases or files – something you should never do unless your intention is to break things.

My advice here is to keep it simple. If you understand how to restore full, differential, and log backups – including the order they should be restored in and when to use the “norecovery” flag – or have third-party software doing this for you, you’re all set. If you don’t, I would suggest setting up log backups to run at regular, short intervals (15 minutes to 1 hour) as a precaution and changing the database recovery models to “simple”. This keeps you protected if you accidentally pull in a database that defaulted to the “full” recovery model and would otherwise have its log file eat the entire disk.
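As a minimal T-SQL sketch of the above (the database name and backup path are placeholders, not from this post): check which recovery model each database uses, switch one to “simple”, or, if it must stay in “full”, back up its log on a schedule:

```sql
-- See each database's current recovery model
SELECT name, recovery_model_desc
FROM sys.databases;

-- Switch a database (placeholder name) to the "simple" recovery model
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

-- Or, if it stays in "full", back up its log regularly (placeholder path)
BACKUP LOG SalesDB TO DISK = 'B:\SalesDB_log.trn';
```

Schedule the log backup as a SQL Agent job at whatever interval matches how much data you can afford to lose.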

Pro Tip: Changing the recovery model of your “model” database sets the default recovery model for all new databases you create.

Any questions, comments, or feedback are appreciated! Leave a comment or email me at aturika@crossrealms.com with any SQL Server questions you might have!

3CX Phone System on Campus

Dec 23, 2019 by Sam Taylor

Higher Learning at a Lower Cost​

Universities are places where ideas can be communicated freely, and what better way to do that than through a unified communications system like 3CX? As the central on-campus communications system, 3CX offers multiple opportunities to encourage and facilitate learning. It can connect staff members and students with benefits for everyone, including free audio/video calls, low-cost external calls, access to all areas, integrations with other commonly used systems, and more. Let’s examine this use case in more detail.

Affordable Communication on a Shoe-string Budget​

3CX is the ideal tool for universities that require all the advanced features of a unified communication system, without the hefty price tag. Apart from a PBX server, 3CX requires no additional hardware to be installed, making it easily accessible to your staff. The only requirement is a PC with a modern web browser. This simplifies administration, drastically reduces support requests and is a more cost-effective solution overall. What’s more, 3CX provides built-in support for a multitude of IP phones and SIP devices, making it easy to choose a desk phone or SIP device that suits everyone’s budget.

Keep in Contact, at the Lecture Theatre, Dorm or While Roaming

Add the 3CX Android and iOS apps to the mix, and your staff can talk, chat and access a university-wide shared phonebook/directory from their smartphones – wherever they may be. When calling on the move, the app reconnects calls automatically through available WiFi or 4G networks. They can also use Chat to exchange messages and documents while at the campus or anywhere else. 3CX can really empower you to do more with your devices!

Extend Your Reach to Facilitate Teamwork

Universities typically span multiple buildings and areas, which makes them difficult to set up under a single communications solution. Not so with 3CX: it can unify all your remote offices and dorms using bridges and SBCs (Session Border Controllers), allowing your personnel and students to communicate irrespective of their location. Academic staff and students can also use WebMeeting at no extra cost to join online video meetings for study groups, or webinar sessions with teaching assistants, lab technicians, and so on.

Never Alone. Integrate & Automate

Traditionally a phone system functions in isolation, with little or no ability to interface with other university systems and services. On the contrary, 3CX includes built-in integration options with Office 365, databases, CRMs and other network-enabled systems.

As a quick example, consider a 3CX script-based IVR (Interactive Voice Response) menu, that services students’ course enrollment requests. The student calls the IVR, enters the ID for the chosen course and 3CX will deliver the student’s telephone number and course selection to the university’s course management system. What’s more, by using the Call Flow Designer (CFD), you can create call flows to automate your procedures, from course billing to announcements via text-to-speech. And CFD does not require any programming knowledge!

Keep in Control of Access & Security

Universities need to maintain controlled, secure access to areas like offices, labs, and dorms. 3CX supports popular video door phone devices, through which you can attend to visitors seeking entry, or even control activity and access to specific areas – doing away with costly security personnel. You can also use PA systems connected to 3CX to make announcements in university common areas, classrooms and halls.

No Master’s Degree Required to Administer

With 3CX, administrators have freedom of choice! Install with ease on Linux, Windows or Raspberry Pi, and on popular cloud providers like Google Cloud, Azure, and AWS. Not only is it easy to install, but easy to manage too. Keep your data safe by securing and managing your backups, recordings and voicemails with flexible options, on local or remote storage (FTP, SSH and SMB). What’s more, administrators can use the built-in Instance Manager to remotely monitor, manage and update a Linux PBX.

In Conclusion​

Universities are by definition communities of teachers and scholars. 3CX bridges the communication gap between these communities, facilitates learning and strengthens relationships. It is the perfect fit for organizations that value communication as a primary means of education. And it comes with an affordable price tag, to boot!

Tips and Tricks with MS SQL (Part 7)

Dec 6, 2019 by Sam Taylor

Quickly See if Ad Hoc Optimization Benefits Your Workloads​

A single setting, frequently left disabled, can make a huge performance impact and free up resources. It is a system-wide setting that allows Microsoft SQL Server to optimize its processes for “ad hoc” workloads. Most SQL Servers I come across that rely heavily on ETL (Extract – Transform – Load) workloads for their day-to-day would benefit from enabling “Optimize for Ad Hoc Workloads”, but often don’t have the setting enabled.

If you perform a lot of ETL workloads and want to know whether enabling this option will benefit you, I’ll make it simple. First we need to determine what percentage of your plan cache is ad hoc. To do so, just run the following T-SQL script in SQL Server Management Studio:

SELECT AdHoc_Plan_MB, Total_Cache_MB,
       AdHoc_Plan_MB * 100.0 / Total_Cache_MB AS 'AdHoc %'
FROM (
    SELECT SUM(CASE
                   WHEN objtype = 'adhoc'
                   THEN size_in_bytes
                   ELSE 0 END) / 1048576.0 AS AdHoc_Plan_MB,
           SUM(size_in_bytes) / 1048576.0 AS Total_Cache_MB
    FROM sys.dm_exec_cached_plans) T

After running this, you’ll see a column labelled “AdHoc %” with a value. As a general rule of thumb, I prefer to enable optimizing for ad hoc workloads when this value is in the 20–30% range. The numbers reset whenever the server restarts, so it’s best to check after the server has been running for at least a week or so. The change only takes effect for newly cached plans. For the impatient, a quicker way to see the results of the change is to restart the SQL Server service, which clears the plan cache.

Under extremely rare circumstances this could actually hinder performance. If that’s the case, just disable the setting and continue on as you were before. As always, feel free to ask me directly so I can help; there isn’t any harm in testing whether this benefits your environment or not. To enable the optimization, right-click the SQL instance in SQL Server Management Studio’s Object Explorer → Properties → Advanced → change “Optimize for Ad Hoc Workloads” to “True” → click “Apply”. From there, run the query “RECONFIGURE” to put the change into action.
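If you prefer T-SQL over the GUI, the same setting can be flipped with sp_configure (it’s an advanced option, so those have to be shown first):

```sql
-- Expose advanced options, then enable ad hoc optimization
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'optimize for ad hoc workloads', 1;
RECONFIGURE;
```

Either route ends with RECONFIGURE, which applies the change without a service restart.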

Any questions, comments, or feedback are appreciated! Leave a comment or email me at aturika@crossrealms.com with any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 6)

Dec 6, 2019 by Sam Taylor

Increase the Number of TEMPDB Data Files

If you’re having issues with queries that contain insert/update statements, temp tables, table variables, calculations, or grouping or sorting of data, it’s possible you’re seeing some contention within the TEMPDB data files. A lot of Microsoft SQL Servers I come across have only a single TEMPDB data file, which is not a Best Practice according to Microsoft. If you have performance issues when the aforementioned queries run, it’s a good idea to check the number of TEMPDB files you have, because often just one isn’t enough.

SQL Server places certain locks on databases, including TEMPDB, when it processes queries. So, if you have 12 different databases all running queries with complex sorting algorithms and calculations over large datasets, all that work is first done in TEMPDB. A single TEMPDB file not only hurts performance and efficiency but can also slow down other processes running alongside it through hogged resources and/or increased wait times. Luckily, the resolution is super simple if you’re in this situation.

Increase the number of data files in TEMPDB to maximize disk bandwidth and reduce contention. As Microsoft recommends, if the number of logical processors is less than or equal to 8, use that many data files. If the number of logical processors is greater than 8, just use 8 data files. If you’ve got more than 8 logical processors and still experience contention, increase the data files in multiples of 4, without exceeding the number of logical processors. If you still have contention issues, look at your workload, code, or hardware to see where improvements can be made.
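To put numbers to the rule above, a quick sketch (the file path and sizes are placeholders for your environment): compare the logical processor count against the current TEMPDB data file count, then add files as needed:

```sql
-- Logical processor count reported by the server
SELECT cpu_count FROM sys.dm_os_sys_info;

-- Current number of TEMPDB data files
SELECT COUNT(*) AS tempdb_data_files
FROM tempdb.sys.database_files
WHERE type_desc = 'ROWS';

-- Add a data file (placeholder path and sizes)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\tempdb2.ndf',
          SIZE = 1024MB, FILEGROWTH = 64MB);
```

Keep every TEMPDB data file the same size and growth setting so SQL Server fills them evenly.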

PRO TIP: When you increase the number of your TEMPDB data files (on its separate drive… remember?) take this time to pre-grow your files. You’ll want to pre-grow all the data files equally and enough to take up the entire disk’s space (accounting for TEMPDB’s log file).

Any questions, comments, or feedback are appreciated! Leave a comment or email me at aturika@crossrealms.com with any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 5)

Dec 6, 2019 by Sam Taylor

Separate Your File Types

File separation is too common and important an issue not to mention in this series. If you’re running any version of Microsoft SQL Server, it’s important to separate your file types into different logical or physical locations. “Data” files, “Log” files, and “TEMPDB” files should never live on the same logical drive. Separation has a big impact on performance, and without it, read/write contention becomes much harder to isolate when troubleshooting.

It’s understandable: the sudden need for a SQL Server pops up and you install a Developer Edition or Express Edition in 10 minutes, leaving the file types in their default locations. However, once this system becomes a production server, you had better know how to relocate these files – or do it right the first time around. It’s easier early on, before the data grows and needs a bigger maintenance window to move.

To keep with Microsoft Best Practices, you can use a drive naming convention similar to what I’ve listed below to help remember where to place your files. If you’re fortunate enough to have physical drive separation, all the more power to you. For most servers I see in this situation, it’s best to start with logical separation at a minimum to yield some powerful results.

Filetype Mapping:

– C:\ – System Databases (default MS SQL installation location)

– D:\ – Data Files

– L:\ – Log Files

– T:\ – TEMPDB Files

– B:\ – Backup Files (with redundancy of course…)
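Before planning a move to a layout like the one above, this query (a standard DMV, nothing environment-specific) lists every database file and where it currently lives:

```sql
-- List every data and log file with its physical path
SELECT DB_NAME(database_id) AS database_name,
       type_desc,
       name AS logical_name,
       physical_name
FROM sys.master_files
ORDER BY database_id, type_desc;
```

Any row whose physical_name doesn’t match the drive mapping is a candidate for relocation at the next maintenance window.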

Any questions, comments, or feedback are appreciated! Leave a comment or email me at aturika@crossrealms.com with any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 4)

Dec 6, 2019 by Sam Taylor

Don't Forget to Enable "IFI" on New Installations​

Instant File Initialization (IFI) is a simple feature with performance benefits that is often left behind on installations of SQL Server that have seen their share of upgrades or migrations. If it wasn’t available in previous versions of Windows Server or Microsoft SQL Server, there’s a good chance someone unfamiliar with its purpose didn’t enable it during an upgrade – why risk enabling a new feature on a system that’s been stable and passed the test of time? During installations of SQL Server 2016 onwards, it presents itself as the “Grant Perform Volume Maintenance Task privilege” checkbox SQL Server asks you to check or leave off (1). It can be enabled in older SQL versions as well, though by different means.

The benefit of enabling this is being able to allocate file space on disk faster. Without IFI enabled, anytime SQL Server needs to grow a file it first must zero out the previously deleted data to reclaim that space on disk. This happens anytime a new database is created, data files are added, database size is increased (including autogrowth events), or a database is restored. Enabling IFI bypasses this “overwrite the disk with zeros” step of the Windows file initialization process for data files (log files are always zero-initialized). The resulting disk performance benefits compound as data grows, especially when non-solid-state media is used.

An analogy for what’s happening here is formatting a USB thumb drive and being presented with the “Perform a Quick Format” checkbox. Quick Format is like enabling IFI: Windows basically just claims all the disk space quickly and lets you go about your day. Without Quick Format, Windows writes zeros to each sector of the drive (which also reveals bad sectors – unrelated to SQL’s use of IFI), which takes much longer because it’s writing across all available space. You’ve probably noticed these differences in formatting speeds before. The performance benefit of Quick Format is like SQL Server with IFI enabled, and it becomes more evident as the size of storage or data increases.

Note (1): If you’re using a domain user account as the service logon account instead of the service account SQL Server defaults to (NT Service\MSSQLSERVER), you’ll need to grant the account the “Perform Volume Maintenance Tasks” right separately under “Local Policies”. Double-check that your SQL service account has this right granted, to be safe. For instructions on granting permissions, you can follow Microsoft’s documentation here.
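For what it’s worth, newer builds of SQL Server (2016 SP1 and later, if I recall correctly, with backports to some earlier service packs) expose the IFI state directly through a DMV, so you can check it without digging through Local Policies:

```sql
-- 'Y' means the service account can use Instant File Initialization
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```

If the column doesn’t exist on your build, that in itself tells you you’re on an older version and will need another method.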

If you want to know other ways to enable IFI on your server without reinstalling SQL Server, or want to know how to check whether IFI is enabled, feel free to reach out. Any questions, comments, or feedback are appreciated! Leave a comment or email me at aturika@crossrealms.com with any SQL Server questions you might have!

Tips and Tricks with MS SQL (Part 3)

Dec 6, 2019 by Sam Taylor

Change Database Auto-Growth from Percent-Based to Fixed-Size Growth​

In an ideal world, all the Microsoft SQL Servers I come across would have their databases pre-grown to account for future growth, with their needs re-evaluated periodically. Unfortunately, this is almost never the case. Instead, these databases rely on SQL Server’s autogrowth feature to expand their data files and log files as needed. The problem is that the default is to autogrow data files by 1MB and log files by 10%.

Since this was such a big issue for performance, Microsoft changed the defaults in SQL Server 2016 onward, where data files and log files each default to a growth increment of 64MB. If your server is still using 1MB autogrowth for data and 10% autogrowth for logs, consider adopting Microsoft’s new defaults and bumping it up to at least 64MB.

Growing a data file in 1MB increments means the server must do extra work. If it needs to grow 100MB, it must run over 100 rounds of grow-by-1MB, add data, then grow again, and repeat. Imagine how bad this gets for databases growing by gigabytes a day! Growing by a percentage is even worse: the server has to do some computing before it can grow, and while 10% of 100MB is easy to account for, as the log file grows the increments quickly get out of hand, bloating your storage system in a runaway fashion while adding CPU overhead as an extra kick in the rear!

The change is luckily very simple. Right-click one of the user databases in SQL Server Management Studio and select “Properties”. From there, click on the “Files” page. Next, expand the “…” button in the Autogrowth column for the “ROWS” data file and change the growth to 64MB or greater (depending on how much room you have to work with and the growth you expect). Do the same for the “LOG” file type. That’s it! You’re done and have given your server some well-needed breathing room!
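The same change can be scripted with ALTER DATABASE. The database and logical file names below are placeholders – look up the real logical names in sys.database_files first:

```sql
-- Find the logical file names and current growth settings
SELECT name, type_desc, growth, is_percent_growth
FROM sys.database_files;

-- Set fixed 64MB growth for the data and log files (placeholder names)
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_data, FILEGROWTH = 64MB);
ALTER DATABASE SalesDB MODIFY FILE (NAME = SalesDB_log,  FILEGROWTH = 64MB);
```

Scripting it makes the fix easy to roll out across every user database instead of clicking through each one.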

Any questions, comments, or feedback are appreciated! Feel free to reach out to aturika@crossrealms.com for any SQL Server questions you might have!