Two are better than one if they act as one.
The two pointer technique is one of the most commonly asked topics in programming interviews. It optimizes runtime by exploiting some order (not necessarily sortedness) in the data. It is generally applied to lists (arrays) and linked lists, and is most often used to search for pairs in a sorted array. The approach works in constant space.
In this technique, the pointers represent either indices or an iteration attribute such as a node's next reference.
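A minimal sketch of the idea, assuming the classic pair-sum problem on a sorted array (the example and names are mine, not from the post):

```python
# Two pointers on a sorted array: find a pair summing to target
# in O(n) time and O(1) space.
def pair_with_sum(nums, target):
    left, right = 0, len(nums) - 1   # pointers at both ends
    while left < right:
        s = nums[left] + nums[right]
        if s == target:
            return nums[left], nums[right]
        if s < target:
            left += 1    # need a bigger sum: move the left pointer right
        else:
            right -= 1   # need a smaller sum: move the right pointer left
    return None

# Example: pair_with_sum([1, 2, 4, 7, 11], 9) -> (2, 7)
```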
Sorting means putting items in a particular order. That order is determined by a comparison property of the elements; in the case of integers, the smaller number comes first and the bigger number comes later. Arranging items in a particular order makes searching for an element faster, which is why sorting has heavy usage in computer science.
In this blog we will go through common sorting algorithms and see an implementation of each in Python. To compare their runtimes I used a Leetcode question on sorting an array. The constraints for the question are as follows.
Constraints:
1 <= nums.length <= 50000
-50000 <= nums[i]…
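As a taste of the kind of implementation the post compares, here is a sketch of insertion sort; which algorithms the post actually benchmarks is behind the truncation, so treating this as one of them is an assumption:

```python
# Insertion sort: build the sorted prefix one element at a time.
# O(n^2) worst case, but simple and fast on small or nearly-sorted input.
def insertion_sort(nums):
    for i in range(1, len(nums)):
        key = nums[i]
        j = i - 1
        # shift larger elements one slot right to make room for key
        while j >= 0 and nums[j] > key:
            nums[j + 1] = nums[j]
            j -= 1
        nums[j + 1] = key
    return nums
```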
In this blog we will go through the two most common non-comparison-based sorting algorithms. But before we do that, why do we need non-comparison sorting?
Comparison-based sorting algorithms have a lower bound of O(n log n) operations to sort n elements. This comes from the fact that a sorted array is one of the n! permutations in which we can arrange the n numbers. Each comparison at best halves the number of permutations still consistent with the answers so far, so to pin down the actual arrangement we need at least log2(n!) comparisons, which by Stirling's approximation is Θ(n log n). To improve upon this lower bound we use…
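The two algorithms the post covers are behind the truncation; counting sort is one common non-comparison sort, so here is a sketch under that assumption:

```python
# Counting sort: tally how often each value occurs instead of comparing
# elements. Runs in O(n + k), where k is the range of values.
def counting_sort(nums):
    if not nums:
        return nums
    lo, hi = min(nums), max(nums)
    counts = [0] * (hi - lo + 1)
    for x in nums:
        counts[x - lo] += 1          # no comparisons between elements
    result = []
    for value, count in enumerate(counts):
        result.extend([value + lo] * count)
    return result

# Example: counting_sort([3, -1, 2, 3, 0]) -> [-1, 0, 2, 3, 3]
```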
In my last blog I discussed BFS & DFS. In this blog we will see the two in action by solving a Leetcode problem.
Given an m x n 2D binary grid grid which represents a map of '1's (land) and '0's (water), return the number of islands.
An island is surrounded by water and is formed by connecting adjacent lands horizontally or vertically. You may assume all four edges of the grid are all surrounded by water.
We traverse the grid; whenever we see a 1, we increase our result by 1. Using that coordinate, or row-col value…
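A minimal sketch of that counting idea, using DFS to "sink" each island so it is counted exactly once (the post's actual solution may use BFS instead; the names here are mine):

```python
# Count islands by flood-filling each '1' region we encounter.
def num_islands(grid):
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])

    def sink(r, c):
        # stop at the grid edge or at water / already-visited cells
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != '1':
            return
        grid[r][c] = '0'             # mark the cell as visited
        sink(r + 1, c); sink(r - 1, c); sink(r, c + 1); sink(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                count += 1           # found a new island
                sink(r, c)           # flood-fill it away
    return count
```

For very large grids an iterative BFS avoids Python's recursion limit; the counting logic stays the same.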
Depth First Search and Breadth First Search are two very common tree/graph traversal and searching algorithms. In this blog we will go through the implementation of both. For simplicity we will implement them for a binary search tree; the logic can be extended to other graphs using an adjacency matrix.
The image below shows the tree that we will traverse.
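Since the image isn't reproduced here, below is a sketch of both traversals on a small hand-built binary tree; the TreeNode class and the sample tree are illustrative assumptions, not the post's exact code:

```python
from collections import deque

class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def bfs(root):
    # visit nodes level by level using a FIFO queue
    order = []
    queue = deque([root] if root else [])
    while queue:
        node = queue.popleft()
        order.append(node.val)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return order

def dfs(root):
    # preorder: node first, then left and right subtrees
    if root is None:
        return []
    return [root.val] + dfs(root.left) + dfs(root.right)

# Example tree:    1
#                 / \
#                2   3
#               / \
#              4   5
root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3))
# bfs(root) -> [1, 2, 3, 4, 5]
# dfs(root) -> [1, 2, 4, 5, 3]
```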
AWS EKS provides a managed Kubernetes service. It speeds up the deployment, management & scaling of the infrastructure required for a production-grade K8s cluster.
Last week we deployed an Astronomer cluster using AWS EKS. It was a dev environment, so we selected subnets from an existing dev VPC. It started well, but as we created multiple deployments on Astronomer, after 4–5 deployments we started getting errors. Other developers using those subnets also started complaining that they were seeing errors. Meanwhile I opened CloudTrail to look at the error events, and I found the below:
"eventSource": "ec2.amazonaws.com",
"eventName": "CreateNetworkInterface",
"userAgent": "aws-sdk-go/1.33.14 …
In this blog post, we will see some best practices for authoring DAGs. Let's start.
DAG as configuration file
The Airflow scheduler scans and compiles DAG files at each heartbeat. If DAG files are heavy and contain a lot of top-level code, the scheduler will consume a lot of resources and time processing them at each heartbeat. So it is advised to keep DAGs light, more like a configuration file. As a step forward, it is a good choice to have a YAML/JSON-based definition of the workflow and then generate the DAG from it. This has double…
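A minimal sketch of that YAML-driven approach, assuming a hypothetical workflow.yaml with a dag_id, a schedule, and a linear list of bash tasks (the schema and file name are mine, not from the post):

```python
# workflow.yaml (hypothetical):
#   dag_id: example_etl
#   schedule: "@daily"
#   tasks:
#     - name: extract
#       command: "echo extract"
#     - name: load
#       command: "echo load"
import yaml
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with open("workflow.yaml") as f:
    config = yaml.safe_load(f)

with DAG(
    dag_id=config["dag_id"],
    start_date=datetime(2021, 1, 1),
    schedule_interval=config.get("schedule"),
    catchup=False,
) as dag:
    previous = None
    for task_cfg in config["tasks"]:
        task = BashOperator(task_id=task_cfg["name"],
                            bash_command=task_cfg["command"])
        if previous:
            previous >> task  # simple linear chain; real dependencies
                              # would also come from the config
        previous = task
```

The DAG file itself now holds almost no logic, so the scheduler's per-heartbeat parse stays cheap.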
Layers are logical collections of nodes/neurons. At the highest level, there are three types of layers in every ANN: input, hidden, and output.
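For illustration, a minimal Keras model with the three layer types in place (layer sizes and activations are arbitrary assumptions):

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential([
    Input(shape=(10,)),             # input layer: receives the raw features
    Dense(32, activation='relu'),   # hidden layer: learns representations
    Dense(1, activation='sigmoid'), # output layer: produces predictions
])
```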
Regularization is a principle that penalizes complex models so that they generalize better; it prevents overfitting. In this blog we will visit common regularization techniques.
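As a one-glance example of the penalty idea, here is a sketch of an L2-regularized loss (the function name and lambda value are illustrative, not from the post):

```python
import numpy as np

def l2_regularized_loss(data_loss, weights, lam=0.01):
    # lam (lambda) controls how strongly large weights, i.e. complex
    # models, are penalized on top of the ordinary data loss
    return data_loss + lam * np.sum(np.square(weights))
```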
Your neural network is only as good as the data you feed it.
The performance of deep learning neural networks often improves with the amount of data available, but we don't usually have huge amounts of data. Data augmentation is a technique to artificially create new training data from existing training data. Depending upon when we apply these transformations, we have two types of augmentation (a sketch of the online flavor follows the list):
Offline — perform all the necessary transformations beforehand
Online…
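A minimal sketch of online (on-the-fly) augmentation using Keras' ImageDataGenerator; the specific transformation values are illustrative assumptions, not taken from the post:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,       # random rotations up to 15 degrees
    horizontal_flip=True,    # random left-right flips
    width_shift_range=0.1,   # random horizontal shifts
)
# Each batch drawn during training is transformed fresh, so the model
# rarely sees the exact same image twice:
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)
```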