Pakistan's First Oracle Blog

Blog By Fahd Mirza Chughtai

Checklist While Troubleshooting Workload Errors in Kubernetes


Following is a checklist for troubleshooting workload/application errors in Kubernetes (a sketch of the matching kubectl commands follows the list):

1- First, check how many nodes there are.

2- Check which namespaces are present.

3- Identify which namespace the faulty application is in.

4- Check which deployment the faulty app belongs to.

5- Check which replicaset (if any) is part of that deployment.

6- Then check which pods are part of that replicaset.

7- Then check which services are part of that namespace.

8- Then check which service corresponds to the deployment where our faulty application is.

9- Then make sure the label selectors from the deployment to the pod template are correct.

10- Then ensure the label selector from the service to the deployment is correct.

11- Then check that any service name referenced in a deployment is correct. For example, if a webserver pod refers to a database host in the env section of its pod template, that host should match the database's service name.

12- Then check that the ports are correct in ClusterIP or NodePort services.

13- Check that the pod status is Running.

14- Check the logs of the pods and containers.
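
Here is a minimal sketch of kubectl commands that map to the steps above; my-app, my-namespace, my-pod, and my-container are placeholders for your own names:

kubectl get nodes
kubectl get namespaces
kubectl get pods --all-namespaces | grep my-app     # locate the faulty app's namespace
kubectl get deployments -n my-namespace
kubectl get replicasets -n my-namespace
kubectl get pods -n my-namespace                    # also shows pod status
kubectl get services -n my-namespace
kubectl describe deployment my-app -n my-namespace  # verify label selectors and env values
kubectl describe service my-app -n my-namespace     # verify selector and ports
kubectl logs my-pod -n my-namespace                 # single-container pod
kubectl logs my-pod -c my-container -n my-namespace # specific container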

I hope that helps and feel free to add any step or thought in the comments. Thanks.

Categories: DBA Blogs

Different Ways to Access Oracle Cloud Infrastructure

Thu, 2020-08-06 09:00

This is a quick rundown of the different ways you can access the ever-improving Oracle Cloud Infrastructure (OCI). Most types of Oracle Cloud Infrastructure resources have a unique, Oracle-assigned identifier called an Oracle Cloud ID (OCID).

You can access Oracle Cloud Infrastructure using the Console (a browser-based interface) or the REST API. To access the Console, you must use a supported browser. You can go to the sign-in page. You will be prompted to enter your cloud tenant, your user name, and your password. The Oracle Cloud Infrastructure APIs are typical REST APIs that use HTTPS requests and responses.

All Oracle Cloud Infrastructure API requests must be signed for authentication purposes, and all requests must use HTTPS with TLS 1.2. Oracle Cloud Infrastructure provides a number of Software Development Kits (SDKs) and a Command Line Interface (CLI) to facilitate development of custom solutions.

Software Development Kits (SDKs)
Build and deploy apps that integrate with Oracle Cloud Infrastructure services. Each SDK provides the tools you need to develop an app, including code samples and documentation to create, test, and troubleshoot. In addition, if you want to contribute to the development of the SDKs, they are all open source and available on GitHub.

  • SDK for Java
  • SDK for Python
  • SDK for TypeScript and JavaScript
  • SDK for .NET
  • SDK for Go
  • SDK for Ruby

Command Line Interface (CLI)
The CLI provides the same core capabilities as the Oracle Cloud Infrastructure Console and provides additional commands that can extend the Console's functionality. The CLI is convenient for developers or anyone who prefers the command line to a GUI.
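
As a quick sketch of getting started with the CLI (assuming you have installed it and have your tenancy OCID at hand; the OCID below is a placeholder):

oci setup config          # interactive: records user OCID, tenancy OCID, region, and API key
oci iam region list       # simple connectivity test, needs no OCIDs
oci iam compartment list --compartment-id ocid1.tenancy.oc1..your-tenancy-ocid
oci os ns get             # Object Storage namespace of your tenancy

Each call is signed automatically using the API key recorded in ~/.oci/config.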

Categories: DBA Blogs

Oracle 11g on AWS RDS Will Be Force Upgraded in Coming Months

Thu, 2020-08-06 00:51
To make a long story short: if you have Oracle 11g running on AWS RDS, then start thinking about, planning, and implementing its upgrade to a later version, preferably Oracle 19c.

This is what AWS has to say about this:

Oracle has announced the end date of support for Oracle Database version 11.2.0.4 as December 31, 2020, after which Oracle Support will no longer release Critical Patch Updates for this database version. Amazon RDS for Oracle will end support for Oracle Database version 11.2.0.4 Standard Edition 1 (SE1) for License Included (LI) model on October 31, 2020. For the Bring Your Own License (BYOL) model, Amazon RDS for Oracle will end the support for Oracle Database version 11.2.0.4 for all editions on December 31, 2020. All 11.2.0.4 SE1 LI instances will be automatically upgraded to 19c starting on November 1, 2020. Likewise, the 11.2.0.4 BYOL instances will be automatically upgraded to 19c starting on January 1, 2021. We highly recommend you upgrade your existing Amazon RDS for Oracle 11.2.0.4 DB instances and validate your applications before the automatic upgrades begin. 

The bit that probably applies to most enterprise customers running Oracle 11g under a BYOL license is this:

January 1, 2021: Amazon RDS for Oracle starts automatic upgrades to 19c of DB instances restored from snapshots.
Instead of leaving it to the last minute, it's better to upgrade sooner. There are lots of things that need to be taken into consideration for this upgrade, both within and outside of the database. A sketch of the relevant AWS CLI calls is below. If you need a hand with any of that, feel free to reach out.
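
For illustration, a minimal AWS CLI sketch of the upgrade itself; the instance identifier is a placeholder, and the exact 19c engine version string varies by region, so check what is available first with describe-db-engine-versions:

# manual snapshot before a major version upgrade
aws rds create-db-snapshot \
    --db-instance-identifier my-oracle-11g \
    --db-snapshot-identifier my-oracle-11g-pre-19c

aws rds modify-db-instance \
    --db-instance-identifier my-oracle-11g \
    --engine-version 19.0.0.0.ru-2020-04.rur-2020-04.r1 \
    --allow-major-version-upgrade \
    --apply-immediately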
Categories: DBA Blogs

Oracle Cloud's Beefed Up Security

Wed, 2020-08-05 01:23
During the first few months of the COVID-19 pandemic, many organizations expected a slowdown in their digital transformation efforts. But surprisingly, things haven't slowed down in many places; instead, many enterprises have accelerated their use of cloud-based services to help them manage and address emerging priorities in the new normal, which includes a distributed workforce and new digital strategies.

More and more companies, especially those in regulated industries, want to adopt the latest cloud technologies, but they often face barriers due to strict data privacy or compliance requirements. As cloud adoption grows, we’re seeing exponential growth in cloud resources. With this we’re also seeing growth in permissions, granted to humans and workloads, to access and change those resources. This introduces potential risks, including the misuse of privileges, that can compromise your organization’s security.

To mitigate these risks, ideally every human or workload should only be granted the permissions they need, at the time they need them. This is the security best practice known as “least privilege access.” Oracle Cloud Infrastructure Identity and Access Management (IAM) lets you control who has access to your cloud resources. You can control what type of access a group of users have and to which specific resources. 

Compartments are a fundamental component of Oracle Cloud Infrastructure for organizing and isolating your cloud resources. You use them to clearly separate resources for the purposes of measuring usage and billing, access (through the use of policies), and isolation (separating the resources for one project or business unit from another). A common approach is to create a compartment for each major part of your organization. 

The first step in establishing least privilege is understanding which permissions a user has today and which have been used recently. Then, you need to understand which permissions this user is likely to need in the future, so you avoid getting into a manually intensive trial-and-error loop of assigning incremental permissions. Once you have that, you need to decide how to construct your identity and access management (IAM) policies so that you can reuse roles across several compartments.
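
As a hypothetical illustration of what such IAM policies look like in OCI (the group and compartment names here are made up), each statement grants one of the verbs inspect, read, use, or manage over a resource family in a compartment:

Allow group ProjectA-Developers to use instance-family in compartment ProjectA
Allow group ProjectA-Developers to read all-resources in compartment ProjectA
Allow group DB-Admins to manage database-family in compartment ProjectA
Allow group Auditors to inspect all-resources in tenancy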

In the Console, you view your cloud resources by compartment. This means that after you sign in to the Console, you'll choose which compartment to work in (there's a list of the compartments you have access to on the left side of the page). Notice that compartments can be nested inside other compartments. The page will update to show that compartment's resources that are within the current region. If there are none, or if you don't have access to the resource in that compartment, you'll see a message.

This experience is different when you're viewing the lists of users, groups, dynamic groups, and federation providers. Those reside in the tenancy itself (the root compartment), not in an individual compartment.

As for policies, they can reside in either the tenancy or a compartment, depending on where the policy is attached. Where it's attached controls who has access to modify or delete it. 
Categories: DBA Blogs

Oracle Cloud for Existing Oracle Workloads

Mon, 2020-07-27 19:57
As the technology requirements of your business or practice grow and change over time, deploying business-critical applications can increase complexity and overhead substantially. This is where Oracle Cloud can assist the organization in an optimal and cost-effective way.


To help manage this ever-growing complexity, organizations need to select a cloud solution which is similar to their existing on-prem environments. Almost all serious enterprise outfits are running some sort of Oracle workload, and it only makes sense for them to select Oracle Cloud in order to leverage what they already know in a better, more modern way. They can also use Oracle's architecture best practices to help build and deliver great solutions.

Cost management, operational excellence, performance efficiency, reliability, and security are hallmarks of Oracle Cloud, among others. Oracle databases are getting more complex and more autonomous. They are now harder to manage, which is why it only makes sense to migrate them over to Oracle Cloud and let Oracle handle all the nitty-gritty.

Designing and deploying a successful workload in any environment can be challenging. This is especially true as agile development and DevOps/SRE practices begin to shift responsibility for security, operations, and cost management from centralized teams to the workload owner. This transition empowers workload owners to innovate at a much higher velocity than they could achieve in a traditional data center, but it creates a broader surface area of topics that they need to understand to produce a secure, reliable, performant, and cost-effective solution.

Every company is on a unique cloud journey, but the core of Oracle is the same.



Categories: DBA Blogs

ADB-ExaC@C? What in the Heck is Oracle Autonomous Database?

Sat, 2020-07-25 23:51
ADB-ExaC@C? I would love to see the expression on Corey Quinn's face when he learns about this naming convention used by Oracle for their Exadata in Cloud offering.

Since Oracle 10g, we have been hearing about the self-managed, self-healing, and self-everything Oracle database. Oracle 10g was touted as a self-healing one, and if you have managed Oracle 7, 8i, or 9i, this was in fact true, considering how much pain 10g took away.

But 10g was far from self-managed, or autonomous in other words. Autonomous means that you wouldn't have to manage anything and the database would run by itself. Once you switch it on (or it could even do that by itself), it would be on its own. This wasn't the case with 10g, 11g, 12c, 18c, etc. Database administrators were still in vogue.

With everything moving over to the cloud, is that still the case? In other words, with Oracle's autonomous bandwagon plus their cloud offerings, is the autonomous database a reality now?

So what in the heck is Oracle Autonomous Database? Autonomous Database delivers a machine-learning-driven, self-managed database capability that natively builds in Oracle's extensive technology stack and best practices for self-driving, self-securing, and self-repairing operation.

Oracle says that their Autonomous Database is completely self-managed, allowing you to focus on business innovation instead of technology, and is consumed in a true pay-per-use subscription model to lower operational cost. Yes, we have heard similar claims with previous versions, but one main difference here is that this one is in the cloud.

Well, if you have opted for Exadata in Oracle's cloud, then it's true to a great extent. Oracle Autonomous Database on Exadata Cloud@Customer (ADB-ExaC@C) is here, and as Oracle would be managing it, you wouldn't have to worry about its management. But if it's autonomous, why would anyone, including Oracle, manage it? Shouldn't it be managing itself?

So this autonomous ADB-ExaC@C provides you with something called Architectural Identicality, which can easily be achieved by anything non-autonomous. They say it's elastic, as it can auto-scale up and down; I think AWS Aurora and GCP BigQuery have been doing that for some time now. Security patching, upgrades, and backups are all behind the scenes and automated for ADB-ExaC@C. I am still at a loss as to what really makes it autonomous here.

Don't get me wrong. I am a huge fan of Exadata despite its blood-curdling price. Putting Exadata in the cloud and offering it as a service is a great idea too, as this would enable many more businesses to use it. My question is simple: ADB-ExaC@C is a managed service for sure, but what makes it autonomous?
Categories: DBA Blogs

What's Different About Oracle's Cloud

Sat, 2020-07-25 23:28
Cloud infrastructure is the foundation to powering your SaaS applications. The cloud infrastructure supporting a SaaS application is the engine that provides the security, scale, and performance for your business applications. It includes the database, operating systems, servers, routers, and firewalls (and more) required to process billions of application transactions every day.


In the words of Larry Ellison, "The main economic benefit of Oracle’s Gen 2 Cloud Infrastructure is its autonomous capability, which eliminates human labor for administrative tasks and thus reduces human error. That capability is particularly important in helping prevent data theft against increasingly sophisticated, automated hacks."

Consider an organization with an outdated, overly complex ERP system that finds it a challenge to efficiently provide financial information. For one thing, its heavily manual processes result in a lack of confidence in data, making it hard to drive productivity and service improvements. By insisting on zero customization of their Oracle Cloud applications, organizations across the world ensure that regular updates are simple and that their processes are integrated and scalable. As a result, one such utility shortened its order lead times significantly, reduced customer complaints, and boosted overall customer satisfaction levels.

Oracle’s second-generation cloud offers autonomous operations that eliminate human error and provide maximum security, all while delivering truly elastic and serverless services with the highest performance—available globally both in the public cloud and your data centers.
Categories: DBA Blogs

ERP in Oracle's Cloud

Sat, 2020-07-25 23:27
Gain resilience and agility, and position yourself for growth. Oracle Fusion Cloud ERP gives you the power to adapt business models and processes quickly so you can reduce costs, sharpen forecasts, and innovate more.


Many companies have started to migrate to the cloud. Zoom, for example, selected Oracle as a cloud infrastructure provider for its core online meeting service, deployed Oracle Cloud within hours, and enabled millions of meeting participants within weeks.

Oracle Cloud is a Generation 2 enterprise cloud that delivers powerful compute and networking performance and includes a comprehensive portfolio of infrastructure and platform cloud services. Built from the ground up to meet the needs of mission-critical applications, Oracle Cloud supports all legacy workloads while delivering modern cloud development tools, enabling enterprises to bring their past forward as they build their future.

Oracle's generation 2 Cloud is the only one built to run Oracle Autonomous Database, the industry's first and only self-driving database. Oracle Cloud offers a comprehensive cloud computing portfolio, from application development and business analytics to data management, integration, security, artificial intelligence (AI), and blockchain. Oracle customers are using Oracle Autonomous Database to transform their businesses by redefining database management through machine learning and automation.

Reduce operational costs by up to 90% with a multimodel converged database and machine learning-based automation for full lifecycle management. Oracle Autonomous Database runs natively on Oracle Cloud Infrastructure while providing workload-optimized cloud services for transaction processing and data warehousing. Oracle Database is the market leader and ranks #1 in the 2019 Gartner Critical Capabilities for Operational Database Management Systems report.

You can protect sensitive and regulated data automatically, patch your database for security vulnerabilities, and prevent unauthorized access—all with Oracle Autonomous Database. You can detect and protect from system failures and user errors automatically and provide failover to standby databases with zero data loss. Autonomous Data Warehouse is a cloud database service optimized for analytical processing. It automatically scales compute and storage, delivers fast query performance, and requires no database administration.

Categories: DBA Blogs

Database Management in Oracle Cloud

Sat, 2020-07-25 23:27
Autonomous Data Warehouse
Oracle Autonomous Data Warehouse Cloud Service is a fully automated, high-performance, and elastic service. You will have all of the performance of market-leading Oracle Database in a fully automated environment that is tuned and optimized for data warehouse workloads.


Autonomous Transaction Processing
Oracle Autonomous Transaction Processing is a fully automated database service tuned and optimized for transaction processing or mixed workloads with the market-leading performance of Oracle Database. The service delivers a self-driving, self-securing, self-repairing database service that can instantly scale to meet demands of mission-critical applications.

Database Cloud Service: Bare Metal
The dense I/O configuration consists of a single Oracle 11g, 12c, or 18c Database instance on 2 OCPUs, with the ability to dynamically scale up to 52 OCPUs without downtime. Available storage configurations range from 5.4 to 51.2 TB of NVMe SSD local storage, with 2- and 3-way mirroring options available.

Database Cloud Service: Virtual Machine
The virtual machine configurations consist of a single Oracle 11g, 12c, or 18c Database instance. Choose from a single-OCPU virtual machine with 15 GB of RAM up to a RAC-enabled virtual machine with 48 OCPUs and over 600 GB of RAM. Storage configurations range from 256 GB to 40 TB.

Exadata Cloud Service
Oracle Exadata Cloud Service enables you to run Oracle Databases in the cloud with the same extreme performance and availability experienced by thousands of organizations which have deployed Oracle Exadata on premise. Oracle Exadata Cloud Service offers a range of dedicated Exadata shapes.

Exadata Cloud@Customer
Oracle Exadata Cloud@Customer is a unique solution that delivers integrated Oracle Exadata hardware and Oracle Cloud Infrastructure software in your data center with Oracle Exadata infrastructure managed by Oracle experts. Oracle Exadata Cloud@Customer is ideal for customers who desire cloud benefits but cannot yet move their databases to the public cloud.

Oracle NoSQL Database Cloud Service
A NoSQL Database Cloud Service with on-demand throughput and storage-based provisioning that supports document, columnar, and key-value data models, all with flexible transaction guarantees.

Oracle MySQL Database Service
MySQL Database Service is a fully managed database service that enables organizations to deploy cloud native database applications using the world’s most popular open source database. It is 100% developed, managed, and supported by the MySQL Team.
Categories: DBA Blogs

ORA-1652

Mon, 2020-07-06 22:01
If you are wondering what might be the reasons behind ORA-1652 (unable to extend temp segment), you can use the following query to get an idea:



select sql_id, sum(temp_space_allocated)/1024/1024
  from dba_hist_active_sess_history
 where sample_time between timestamp '2020-06-25 19:30:00' and timestamp '2020-06-25 20:00:00'
 group by sql_id
 order by 2 desc;

Also monitor the sessions in real time: maybe some other query is using temp and rapidly filling it. The view V$TEMPSEG_USAGE, which describes temporary segment usage, is the place to look; a sample query follows.
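
Here is a sketch of such a query against V$TEMPSEG_USAGE, joined to V$SESSION to see who is holding temp space right now:

select s.sid, s.serial#, s.username, s.sql_id,
       u.tablespace, u.segtype,
       round(u.blocks * ts.block_size / 1024 / 1024) as temp_mb
  from v$tempseg_usage u
  join v$session s on s.saddr = u.session_addr
  join dba_tablespaces ts on ts.tablespace_name = u.tablespace
 order by temp_mb desc;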
Categories: DBA Blogs

FiveTran Auth Error in Connecting Oracle RDS Database

Mon, 2020-07-06 21:49
The Fivetran tool promises to reduce technical debt with scalable connectors managed from source to destination. You can model your business logic in any destination using SQL, the industry standard, and change data capture delivers incremental updates for all your sources. Built with analysts in mind, its connectors allow data teams to concentrate on asking the right questions.


I was trying to connect from Fivetran to an Oracle database hosted on AWS RDS and was getting an Auth error while connecting through an SSH tunnel. I was following the Fivetran instructions to create the SSH user, grant its rights, set up the keys, etc., but even then the Auth error kept appearing.

The solution is to allow the fivetran user and its group in the /etc/ssh/sshd_config file and restart the SSH daemon, as sketched below.

Only then will you be able to connect.
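
A minimal sketch of that sshd_config change, assuming the tunnel user and its group are both named fivetran. Note that once AllowUsers/AllowGroups directives are present, any user not listed is denied, so include your existing admin users as well:

# /etc/ssh/sshd_config
AllowUsers fivetran
AllowGroups fivetran

# then restart the daemon
sudo systemctl restart sshd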


Categories: DBA Blogs

Oracle Cloud is Rich with Spatial

Fri, 2020-05-29 03:47
Spatial is a technology of the future. Massive amounts of data will be generated, cleansed, and stored. Oracle's autonomous databases in Oracle Cloud are ready to take on the challenge with a bang, with a solid, time-tested database offering.

Machine learning and graph are just the elementary building blocks of this spatial offering from Oracle.

Oracle Database includes native spatial data support, rich location query and analysis, native geocoding and routing, and map visualization to support location-enabled business intelligence applications and services. With a network data model, raster and gridded data analysis, 3D and point cloud operations, a location tracking server, and topology management, Oracle Database provides a comprehensive platform for GIS.

It's easy for developers to add spatial capabilities to applications, with standards-based SQL and Java APIs, JSON and REST support, and integration with Database tools, Oracle Business Intelligence, and Applications. With dramatically fast spatial index and query performance, Exadata integration, and support for Database features such as partitioning, security, distributed transactions, and sharding, Oracle Database powers the most demanding, large-scale geospatial applications: from cloud-based location services to transportation, utilities, agriculture, asset management, LiDAR analysis, energy and natural resources, and planning. A small taste of the SQL API is below.
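
As a small, self-contained sketch of that SQL API (the table and index names are made up; SRID 8307 is the usual longitude/latitude coordinate system):

create table stores (
  id       number primary key,
  name     varchar2(100),
  location sdo_geometry
);

insert into user_sdo_geom_metadata values ('STORES', 'LOCATION',
  sdo_dim_array(sdo_dim_element('LON', -180, 180, 0.005),
                sdo_dim_element('LAT',  -90,  90, 0.005)), 8307);

create index stores_sidx on stores(location) indextype is mdsys.spatial_index;

insert into stores values (1, 'Downtown',
  sdo_geometry(2001, 8307, sdo_point_type(151.2093, -33.8688, null), null, null));

-- find stores within 5 km of a point
select name
  from stores s
 where sdo_within_distance(s.location,
         sdo_geometry(2001, 8307, sdo_point_type(151.20, -33.87, null), null, null),
         'distance=5 unit=KM') = 'TRUE';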

With 12.2, Oracle continues to deliver the most advanced spatial and graph database platform for applications from geospatial and location services to Internet of Things, social media, and big data.

Categories: DBA Blogs

Data Encryption in Oracle Cloud

Fri, 2020-05-29 03:40
The world's leading financial institutions run their mission-critical databases on Oracle, and the biggest concerns they have about moving their databases to the cloud are encryption in transit and at rest.
Oracle TDE prevents attacks from users attempting to read sensitive data from tablespace files and from users attempting to read information from acquired disks or backups, by denying access to clear-text data. Oracle TDE technology uses a two-tier encryption key architecture to enforce clear separation of keys from encrypted data. The encryption keys for this feature are all managed by Oracle TDE. The encryption algorithm used is AES128. A small example is below.
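
As a quick sketch of what TDE looks like in SQL (names are examples; this assumes a keystore/wallet is already configured and open):

-- column-level encryption of a single sensitive column
create table customers (
  id      number primary key,
  name    varchar2(100),
  card_no varchar2(20) encrypt using 'AES192'
);

-- tablespace-level encryption: everything stored in the tablespace is encrypted
-- (the bare datafile clause assumes Oracle Managed Files)
create tablespace secure_data
  datafile size 1g
  encryption using 'AES128'
  default storage (encrypt);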

Redaction is the process of censoring or obscuring part of a text for legal or security purposes. The Data Redaction feature redacts customer data in Responsys to obfuscate consumers' Personally Identifiable Information (PII) from Responsys users. 

For example, Responsys accounts may want to redact customer data such as Email Addresses and Mobile Phone Numbers in the profile list to ensure customer data is hidden from Responsys end users. Data redaction ensures that Responsys accounts are compliant with data protection regulations to keep consumers' PII or medical records (for HIPAA compliance) confidential.

It is imperative that you test your database migrations to the cloud with these redaction techniques, and your well-architected review must include these use cases.

Oracle has implemented a “ubiquitous encryption” program with the goal of encrypting all data, everywhere, always. For customer tenant data, we use encryption both at-rest and in-transit. The Block Volumes and Object Storage services enable at-rest data encryption by default, by using the Advanced Encryption Standard (AES) algorithm with 256-bit encryption. In-transit control plane data is encrypted by using Transport Layer Security (TLS) 1.2 or later.
Categories: DBA Blogs

MLOps and Data Mining in Oracle 19c

Fri, 2020-05-29 03:34
Machine learning operations, aka MLOps, is quickly gaining traction even in the database arena, and Oracle is not behind. They are heavily using it in their data mining techniques and introducing new algorithms and other frameworks.




Data mining is a technique that discovers previously unknown relationships in data. Data mining is the practice of automatically searching large stores of data to discover patterns and trends that go beyond simple analysis. Data mining uses sophisticated mathematical algorithms to segment the data and to predict the likelihood of future events based on past events. Data mining is also known as Knowledge Discovery in Data (KDD).

This is especially very much pertinent when it comes to OLAP. On-Line Analytical Processing (OLAP) can be defined as fast analysis of multidimensional data. OLAP and data mining are different but complementary activities. Data mining and OLAP can be integrated in a number of ways. OLAP can be used to analyze data mining results at different levels of granularity. Data Mining can help you construct more interesting and useful cubes.

Data mining does not automatically discover information without guidance. The patterns you find through data mining are very different depending on how you formulate the problem. Each data mining model is produced by a specific algorithm. Some data mining problems can best be solved by using more than one algorithm. This necessitates the development of more than one model. For example, you might first use a feature extraction model to create an optimized set of predictors, then a classification model to make a prediction on the results.

In Oracle Data Mining, scoring is performed by SQL language functions; a small example follows this paragraph. Oracle Data Mining supports attributes in nested columns. A transactional table can be cast as a nested column and included in a table of single-record case data. Similarly, star schemas can be cast as nested columns. With nested data transformations, Oracle Data Mining can effectively mine data originating from multiple sources and configurations.
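
A sketch of SQL-function scoring with the built-in PREDICTION functions; the model name churn_model and the customers table are hypothetical and would have to exist in your schema:

select cust_id,
       prediction(churn_model using *)             as predicted_class,
       prediction_probability(churn_model using *) as probability
  from customers
 order by probability desc;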



Categories: DBA Blogs

CPU Conundrum in Oracle 19c

Fri, 2020-05-29 03:29
If you have managed Oracle databases on any kind of hardware, you know how important the CPU is for the optimal performance of the database. Over the years, Oracle has tried hard to come up with efficient strategies to make sure that the CPU is utilized efficiently or caged properly.


Configuring the machine to optimally leverage the CPU is a big ask. Normally, on top of Bare Metal or Virtual instances, each instance of an Oracle database is configured to use a number of vCPUs by enabling Oracle Database Resource Manager (DBRM) and setting the CPU_COUNT parameter. If DBRM is not configured, the CPU_COUNT setting simply reflects the total vCPUs on the system. Enabling DBRM allows the CPU_COUNT setting to control the number of vCPUs available to the database. This applies at both the CDB (Container Database) and PDB (Pluggable Database) levels.

The most common approach to managing CPU resources is to NOT over-provision and simply allocate CPU according to what’s available. Whether CPU is allocated to Virtual Machines that each contain a single database, or CPU is allocated to individual databases residing on a single Virtual Machine, the result is the same.

Oracle offers the ability to configure “shares” for each Pluggable Database within a Container Database.  Each instance of a Container Database is given an amount of vCPU to use by enabling DBRM and setting CPU_COUNT. The Pluggable Databases within that Container Database are then given “shares” of the vCPU available to the Container Database. Each Pluggable Database then receives the designated share of CPU resources, and the system is not over-subscribed.

DBRM constantly monitors demand for CPU resources within each Pluggable Database, as well as the overall availability of CPU resources at the Container Database level. DBRM allows each Pluggable Database to automatically and immediately scale up to use more CPU resources if available in the Container Database. The ability to use Dynamic CPU Scaling is a new feature of Oracle Database 19c that allows Pluggable Databases to automatically scale CPU resources up and down in response to user demand.
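
For illustration, a sketch of configuring shares with DBRM; the plan and PDB names here are made up:

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_cdb_plan(
    plan    => 'my_cdb_plan',
    comment => 'CPU shares between PDBs');
  dbms_resource_manager.create_cdb_plan_directive(
    plan => 'my_cdb_plan', pluggable_database => 'pdb1',
    shares => 3, utilization_limit => 100);
  dbms_resource_manager.create_cdb_plan_directive(
    plan => 'my_cdb_plan', pluggable_database => 'pdb2',
    shares => 1, utilization_limit => 50);
  dbms_resource_manager.validate_pending_area();
  dbms_resource_manager.submit_pending_area();
end;
/

alter system set resource_manager_plan = 'my_cdb_plan';

With these directives, pdb1 gets three CPU shares for every one share of pdb2 when both are busy, while pdb2 is additionally capped at 50% utilization.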

Categories: DBA Blogs

Converged Oracle

Fri, 2020-05-29 03:25
It's very interesting to note that while some database providers, especially in the cloud, are diverging their database offerings for various use cases, Oracle is talking about converged or unified databases for various on-prem and cloud use cases.


Converged databases support Spatial data for location awareness, JSON for document stores, IoT for device integration, in-memory technologies for real-time analytics, and of course, traditional relational data.

Mainly, these so-called converged databases are aimed at supporting mixed workloads, or keeping your data in one location for disparate applications. You won't have to worry about managing and, more importantly, integrating different systems. You will synergize everything into one.

Now, that looks great in theory, but we have seen adverse impacts when you try to combine things such as OLTP and DWH, or graph databases with spatial, and so on. The marriage of these different use cases might become a performance nightmare if not handled right.

Oracle Database at its core is a good manifestation of a converged database, as it provides support for Machine Learning, Blockchain, Graph, Spatial, JSON, REST, Events, Editions, and IoT Streaming as part of the core database. A small JSON-in-SQL taste is below.
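
For instance, a minimal sketch of the JSON side of that convergence (the table and column names are examples); relational and document data live in the same engine and are queried with plain SQL dot notation:

create table orders (
  id  number primary key,
  doc varchar2(4000) check (doc is json)   -- the IS JSON constraint enables dot notation
);

insert into orders values (1, '{"customer":{"name":"Acme"},"total":120.50}');

select o.doc.customer.name as customer,
       o.doc.total         as total
  from orders o;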
Categories: DBA Blogs

SSIS in AWS RDS

Fri, 2020-05-22 03:38
Whenever migrating a SQL Server database from on-prem to AWS Cloud, my first preference is always to move it to AWS RDS, the managed database service. So whenever a client asks me to migrate an on-prem SQL Server database, my first question is:


Do you need to access filesystem as part of this database operations?

(Secretly wishing the answer would be NO.) But more often than not, SSIS is the deal breaker in such a database migration, and the database ends up on an EC2 instance, which is still better than having it on-prem.

Managing a SQL Server on EC2 seems like a heavy chore when your other SQL Servers are humming smoothly on RDS and you know you don't have to nurse and babysit them. Well, the prayers have been answered, and the days of looking at those EC2-based SQL Servers running SSIS are numbered.

AWS has announced SSIS support on RDS. For now, it's only compatible with SQL Server 2016 and 2017, which is a bit of a bummer, but still a welcome thing. SSIS is enabled through option groups in RDS, and you have to do the S3 integration, which is fairly straightforward (a CLI sketch is below). You can find step-by-step instructions here.
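
Here is an AWS CLI sketch of the option-group part; the option group and instance names are placeholders, and the engine name/version should match your edition (sqlserver-se 14.00 is SQL Server 2017 Standard Edition):

aws rds create-option-group \
    --option-group-name ssis-og \
    --engine-name sqlserver-se \
    --major-engine-version 14.00 \
    --option-group-description "option group with SSIS"

aws rds add-option-to-option-group \
    --option-group-name ssis-og \
    --options "OptionName=SSIS" \
    --apply-immediately

aws rds modify-db-instance \
    --db-instance-identifier my-sqlserver \
    --option-group-name ssis-og \
    --apply-immediately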

Looking forward to migrating my SSIS-stuck EC2-based SQL Servers to RDS now.


Categories: DBA Blogs

Cloud Vanity: A Weekly Carnival of AWS, GCP and Azure - Edition 1

Thu, 2020-05-07 18:46
This is the first edition of this weekly collection about what is happening in the rapidly evolving cloud sphere. This will mainly focus on news, blogs, articles, tidbits, and views from AWS, Azure and GCP but will also include other Cloud providers from time to time. Enjoy Reading!!!




AWS:

Amazon Relational Database Service (RDS) for SQL Server now supports distributed transactions using Microsoft Distributed Transaction Coordinator (MSDTC). With MSDTC, you can run distributed transactions involving RDS for SQL Server DB instances.

Despite the Kubernetes and Serverless hypes, the vast majority of cloud workloads still happen on virtual machines. AWS offers the Amazon Elastic Compute Cloud (EC2) service, where you can launch virtual machines (AWS calls them instances).

James Beswick shows how you can import large amounts of data to DynamoDB using a serverless approach.

Amazon Lightsail provides an easy way to get started with AWS for many customers. The service balances ease of use, security, and flexibility. The Lightsail firewall now offers additional features to help customers secure their Lightsail instances.

This week AWS Security Hub launched a new security standard called AWS Foundational Security Best Practices.

GCP:

As organizations look to modernize their Windows Server applications to achieve improved scalability and smoother operations, migrating them into Windows containers has become a leading solution. And orchestrating these containers with Kubernetes has become the industry norm, just as it has with Linux.

“Keep calm and carry on.” While the words may resonate with the public, carrying on with business as usual these days is not an option for most enterprises—especially not application development and delivery teams.

During times of challenge and uncertainty, businesses across the world must think creatively and do more with less in order to maintain reliable and effective systems for customers in need.

COVID-19 is forcing us all to adapt to new realities. This is especially true for the healthcare industry. From large healthcare providers to pharmaceutical companies to small, privately run practices, nearly every customer in the healthcare industry is re-evaluating and shifting their strategies.

Protecting users and data is a big job for organizations, especially as attackers continue to attempt to access enterprise credentials and gain control of corporate machines. Google has been working hard to help protect corporate passwords with features like Password Checkup and a variety of other Chrome functionalities.

Azure:

Modern applications are increasingly built using containers, which are microservices packaged with their dependencies and configurations. For this reason, many companies are either containerizing their existing applications or creating new complex applications that are composed of multiple containers.

In the past few months, there has been a dramatic and rapid shift in the speed at which organizations of all sizes have enabled remote work amidst the global health crisis. Companies examining priorities and shifting resources with agility can help their employees stay connected from new locations and devices, allowing for business continuity essential to productivity.

Whether you're a new student, thriving startup, or the largest enterprise, you have financial constraints and you need to know what you're spending, where, and how to plan for the future. Nobody wants a surprise when it comes to the bill, and this is where Azure Cost Management + Billing comes in.

Azure Container Registry announces dedicated data endpoints, enabling tightly scoped client firewall rules to specific registries, minimizing data exfiltration concerns.

Azure Backup uses Recovery Services vault to hold customers' backup data which offers both local and geographic redundancy. To ensure high availability of backed up data, Azure Backup defaults storage settings to geo-redundancy.
Categories: DBA Blogs

Kubernetes Taints/Tolerations and Node Affinity for Dummies

Thu, 2020-04-09 21:26

In order to guarantee which pod goes to which node in a Kubernetes cluster, the concepts of Taints/Tolerations and Node Affinity are used. With Taints/Tolerations, we taint a node with a specific key/value pair and then add a matching toleration in the pod manifest, ensuring that if a pod doesn't have that toleration, it won't be scheduled on that tainted node. To ensure that this tolerated pod only goes to the tainted node, we also add an affinity within the pod manifest.


So, in other words, Taints/Tolerations are used to repel undesired pods, whereas Node Affinity is used to guide the Kubernetes scheduler to place a specific pod on a specific node.

So why do we need both Taints/Tolerations and Node Affinity? To guarantee that a pod goes to the intended node. Taints/Tolerations ensure that undesired pods stay away from a node, but they don't ensure that the desired pod will actually be placed on that node. To guarantee that, we use node affinity.

Following is a complete example with 4 deployments: red, blue, green, and other. We have 4 worker nodes: node01, node02, node03, and node04.

We have labelled the nodes with their respective colors and added a taint with the same key/value pair. Then we added a toleration for the respective key/value pair in each deployment. For example, for node01 the label is red, the taint is also red, and any pod that doesn't tolerate red won't be scheduled on this node. Then we added node affinity, which ensures that red pods will only be placed on the node with label red. The same logic is used for the other deployments.


For Node red:

kubectl label node node01 color=red
kubectl taint node node01 color=red:NoSchedule

For Deployment red:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: red
spec:
  replicas: 1
  selector:
    matchLabels:
      color: red
  template:
    metadata:
      labels:
        color: red
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - red
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "red"
        effect: "NoSchedule"


For Node blue:

kubectl label node node02 color=blue
kubectl taint node node02 color=blue:NoSchedule

For Deployment blue:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: blue
spec:
  replicas: 1
  selector:
    matchLabels:
      color: blue
  template:
    metadata:
      labels:
        color: blue
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - blue
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "blue"
        effect: "NoSchedule"

For Node green:

kubectl label node node03 color=green
kubectl taint node node03 color=green:NoSchedule

For Deployment green:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: green
spec:
  replicas: 1
  selector:
    matchLabels:
      color: green
  template:
    metadata:
      labels:
        color: green
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - green
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "green"
        effect: "NoSchedule"


For Node Other:

kubectl label node node04 color=other

For Deployment other:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: other
spec:
  replicas: 2
  selector:
    matchLabels:
      color: other
  template:
    metadata:
      labels:
        color: other
    spec:
      containers:
      - name: nginx
        image: nginx
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: color
                operator: In
                values:
                - other
      tolerations:
      - key: "color"
        operator: "Equal"
        value: "other"
        effect: "NoSchedule"
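
After applying the manifests, a quick way to verify the placement:

kubectl get pods -o wide                        # NODE column shows where each pod landed
kubectl describe node node01 | grep -i taints   # confirm the taint on the node
kubectl get nodes --show-labels                 # confirm the color labels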

Hope this helps!!!
Categories: DBA Blogs

BigQuery Materialized Views and Oracle Materialized Views

Wed, 2020-04-08 18:33
One common way of setting up one-to-many replication in Oracle databases involves, at a high level, having one master transactional database which holds the transactions; an mview log is then created on the source table.


Then all the other reporting databases subscribe their respective materialized views (MViews) to this log table. These MViews remain in sync with the master log table through incremental refresh or through complete refresh. As long as it runs fine, it runs fine, but when things break, it becomes ugly, and I mean ugly. The MViews at the reporting databases can lag behind the master log due to a network issue or if the master database goes down. Doing a complete refresh is also a nightmare, and you have to do lots of purging and tinkering. The more subscribing MViews there are, the more hassle it is when things break.

BigQuery is Google's managed data warehousing service, which now offers materialized views. If you have managed Oracle MViews, it brings you to tears when you learn that BigQuery MViews offer the following:

Zero maintenance: A materialized view is recomputed in the background once the base table has changed. All incremental data changes from the base tables are automatically added to the materialized views. No user inputs are required.

Always fresh: A materialized view is always consistent with the base table, including BigQuery streaming tables. If a base table is modified via update, merge, partition truncation, or partition expiration, BigQuery will invalidate the impacted portions of the materialized view and fully re-read the corresponding portion of the base table. For an unpartitioned materialized view, BigQuery will invalidate the entire materialized view and re-read the entire base table. For a partitioned materialized view, BigQuery will invalidate the affected partitions of the materialized view and re-read the entire corresponding partitions from the base table. Partitions that are append-only are not invalidated and are read in delta mode. In other words, there will never be a situation when querying a materialized view results in stale data.

Smart tuning: If a query or part of a query against the source table can instead be resolved by querying the materialized view, BigQuery will rewrite (reroute) the query to use the materialized view for better performance and/or efficiency.

In my initial testing, things work like a charm, and a refresh takes at most a couple of minutes. I will be posting some tests here very soon. But suffice it to say that delegating management of MView refresh to Google is reason enough to move to BigQuery. A minimal example is below.
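
A minimal sketch of creating one (the dataset and table names are hypothetical); BigQuery currently restricts materialized views to aggregations over a single base table:

CREATE MATERIALIZED VIEW mydataset.daily_sales_mv AS
SELECT product_id,
       DATE(order_ts) AS order_day,
       SUM(amount)    AS total_amount,
       COUNT(*)       AS order_count
FROM mydataset.sales
GROUP BY product_id, order_day;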


Categories: DBA Blogs
