Columbia Engineering Exec Ed Construction Management

Why enroll for the Postgraduate Diploma in Construction Management (E-Learning)?

The average Teacher-Student ratio in the program is 1:300.

Who is this Diploma for?

Here are roles well suited to this certification diploma:

Civil Engineer

Project Planner

Construction Engineer


Construction Manager

Senior Project Manager

Lead Engineer

Senior Site Engineer

Operations Manager

Industrial Engineer

Project Administrator

Mechanical Engineer

Project Developer


Project Engineer

Real Estate Developers and Investors


[Figure] was spent worldwide on construction in 2023

[Figure] of megaprojects around the globe are 40% or more behind schedule

Your Learning Journey

463 Video Lectures

32 Assignments

24 Discussions

Live Webinars

Real-World Examples

1 Capstone Project

Weekly Q&A Sessions

Module 1: Construction Project Management

Week 1: Construction Industry Overview

Case Study: 2nd Avenue Subway Project

Week 2: Fundamentals of the Project Development Cycle

Week 3: Surety Bonds and Introduction to Lean Theory

Week 4: Sustainable Development and Safety Practices

Week 5: Technology Trends in the Construction Industry

Week 6: International View of Construction Projects

Week 7: Project Manager and Project Logistics

Week 8: Project Planning

Module 2: Construction Scheduling

Week 10: Construction Activities

Week 11: Critical Path Method

Week 12: Activity on Arrow

Break Weeks: 13 and 14

Week 15: PERT and Range Estimating

Week 16: Role of Scheduler and Line of Balance

Week 17: Trends and Technologies in Scheduling

Week 18: Risk Allocation in Scheduling

Week 19: Scheduling for Large Programs and Lean Design
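The Critical Path Method from Week 11 boils down to a forward pass (earliest times) and a backward pass (latest times) over the activity network. A minimal sketch, with hypothetical activities and durations chosen only for illustration:

```python
# Minimal critical-path-method (CPM) sketch. Activities and durations are
# hypothetical; real schedules come from the project WBS. Activities must
# be listed so that every predecessor appears before its successors.

def cpm(activities):
    """activities: {name: (duration, [predecessor names])}"""
    es, ef = {}, {}                        # earliest start / finish
    for name, (dur, preds) in activities.items():
        es[name] = max((ef[p] for p in preds), default=0)
        ef[name] = es[name] + dur
    project_end = max(ef.values())
    ls, lf = {}, {}                        # latest start / finish
    for name in reversed(list(activities)):
        dur, _ = activities[name]
        succs = [s for s, (_, ps) in activities.items() if name in ps]
        lf[name] = min((ls[s] for s in succs), default=project_end)
        ls[name] = lf[name] - dur
    critical = [n for n in activities if es[n] == ls[n]]   # zero total float
    return project_end, critical

acts = {
    "excavate":   (3, []),
    "foundation": (4, ["excavate"]),
    "framing":    (6, ["foundation"]),
    "electrical": (3, ["framing"]),
    "plumbing":   (2, ["framing"]),
    "finish":     (2, ["electrical", "plumbing"]),
}
duration, critical = cpm(acts)
print(duration, critical)   # project lasts 18 days; plumbing is the only non-critical activity
```

Activities whose earliest and latest starts coincide have zero float, so any delay to them delays the whole project.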

Module 3: Construction Cost Estimating and Cost Control

Week 21: Construction Cost Estimating, Cost Control, and Quantity Takeoff

Week 22: Quantity Takeoff and Measurements

Week 23: Building the Estimate and Procurement

Week 24: Post Contract, Cost Estimation, and Program Cost

Week 25: Earned Value Method and Cost Estimation in Practice

Week 26: Project Cash Flow

Week 27: Program Cost Estimating and Technology Trends

Week 28: Lean in Cost Control and Estimating

Module 4: Construction Finance

Week 30: The Mathematics of Money – Part 1

Week 31: The Mathematics of Money – Part 2

Week 32: Real Estate Finance for Development Projects – Comps, Cap Rate, Discounted Cash Flow (DCF)

Week 33: Real Estate Finance for Development Projects – Net Present Value (NPV), Internal Rate of Return (IRR)

Week 34: Financial Plans for Development Projects

Week 35: Financial Plans: Decision Tree Analysis for Real Estate Projects

Week 36: Project Finance and Risk in Project Finance

Week 37: Risk in Project Finance, Lean in Construction Finance
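The NPV and IRR concepts from Weeks 32 and 33 reduce to a few lines of arithmetic. A hedged sketch with hypothetical cash flows, finding IRR by simple bisection:

```python
# Sketch of NPV and IRR for a development project. Cash flows are
# hypothetical: an initial outlay at time 0 followed by annual net inflows.

def npv(rate, cash_flows):
    # cash_flows[0] is at time 0 (typically the negative initial investment)
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0):
    # Bisection: narrow in on the discount rate where NPV crosses zero.
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cash_flows) * npv(mid, cash_flows) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

flows = [-1000, 400, 400, 400]        # hypothetical project, in thousands
print(round(npv(0.08, flows), 2))     # positive NPV at an 8% discount rate
print(round(irr(flows), 4))           # IRR is roughly 9.7%
```

A positive NPV at the chosen discount rate, or equivalently an IRR above the required return, supports a Go decision.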

Capstone Project

Key Takeaways

Describe the life-cycle phases of a construction project, outline roles and responsibilities, and identify and mitigate risks

State the best practices pertaining to sustainability as well as the safety of workers at construction sites

Use various theories, aids, and software to accurately and effectively estimate and control the schedule of even large-scale construction projects

Calculate quantity takeoffs, measurements, and pricing of equipment, labor, and materials to create a successful construction estimate

Define the key concepts in procurement and post contract activities

Use various calculations and other assessment tools to determine the profitability and risk factors of construction projects

Industry Examples

Second Avenue Subway

The Second Avenue Subway is a project on the east side of Manhattan. It is one of the very few major expansions of the New York City subway system in recent decades.

601 Lexington Avenue

The tower stands in Midtown Manhattan at 59 stories tall. The structural engineer on the project was William LeMessurier.

Limbach’s Implementation of BIM Using LMPS (802)

5D estimating software accepts input through multiple different avenues. You can take the data directly out of the 5D estimating software and feed it into the system of your choice, to be used both operationally and for cost management.

Apex Feasibility Study report

It is a quick-and-dirty check of whether a project works on a no-time-value, fully-financed-by-equity basis. It is usually the first step in project design. The financed-by-equity assumption is conservative, and the check enables quick rejection (NoGo) of hopeless projects.
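A feasibility check of this kind can be sketched in a few lines: ignore the time value of money, assume all-equity financing, and reject if the undiscounted margin misses a hurdle. The figures, hurdle rate, and function name below are hypothetical:

```python
# Sketch of a quick-and-dirty, no-time-value, all-equity feasibility check.
# All figures are hypothetical and in dollars.

def quick_feasibility(total_cost, expected_revenue, min_margin=0.0):
    profit = expected_revenue - total_cost          # undiscounted profit
    margin = profit / total_cost                    # profit per dollar of cost
    return ("Go" if margin >= min_margin else "NoGo"), profit

decision, profit = quick_feasibility(total_cost=80_000_000,
                                     expected_revenue=95_000_000,
                                     min_margin=0.10)
print(decision, profit)   # an 18.75% undiscounted margin clears the 10% hurdle
```

Because the all-equity, no-discounting assumptions are conservative, a NoGo here rules a project out quickly, while a Go only means it merits a fuller financial analysis.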

Accounting company for small businesses contributes to the diploma content related to the Estimate at Completion (EAC) lifecycle.


Architectural, engineering, planning, environmental, and construction management provider contributes to the diploma content related to defining the role of a construction manager, and BIM.


Architecture, engineering, and construction firm contributes insights into construction bonds to the diploma.

McKinsey & Co.

Public and private sector consulting firm contributes to the diploma content related to capital projects and infrastructure.


Global law firm contributes content on the features of construction contracts to the diploma.


Project development and construction group contributes best practices on EHS as well as sustainability to the diploma.

Strategic Program Management LLC

Large capital construction company contributes to the diploma content focused on identifying and managing international risks.


Note: All product and company names are trademarks™ or registered® trademarks of their respective holders. Use of them does not imply any affiliation with or endorsement by them.


Faculty Ibrahim Odeh

Dr. Odeh is the Founding Director of the Global Leaders in Construction Management Program. He holds an MBA with an emphasis on Finance from Minnesota, USA, and received his Ph.D. in Civil Engineering with a focus on Construction Management from the University of Illinois at Urbana-Champaign.

Industry Leaders

In addition to video lectures by faculty, industry experts from the construction world share their knowledge and experience through periodic guest lectures.

Application Exercises

Hands-on activities and discussions provide experience and first-hand understanding of the challenges you may face as a construction project manager. Some sample exercises:

Propose three significant sustainability measures for a planned renovation

Create a WBS for an office building

Identify and mitigate two international risks in a project

Create a line of balance diagram for an electrical upgrade project

Choose the project delivery method for a water treatment plant expansion and emergency water main replacement

Complete Integrated Management System

Calculate the equipment cost, labor cost, and material and subcontractor cost for a given project

Determine the project EVM and cash flow

Address new bond requirements for construction projects

Make a GO or NO-GO decision

Compare the two types of construction loans
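The "Determine the project EVM" exercise rests on four standard earned-value quantities. A minimal sketch, assuming hypothetical figures for planned value (PV), earned value (EV), and actual cost (AC):

```python
# Sketch of the standard earned value management (EVM) metrics.
# PV, EV, and AC below are hypothetical.

def evm_metrics(pv, ev, ac):
    return {
        "SV":  ev - pv,    # schedule variance (negative = behind schedule)
        "CV":  ev - ac,    # cost variance (negative = over budget)
        "SPI": ev / pv,    # schedule performance index (<1 = behind)
        "CPI": ev / ac,    # cost performance index (<1 = over budget)
    }

m = evm_metrics(pv=500_000, ev=450_000, ac=480_000)
print(m)   # SPI 0.9 and CPI under 1: behind schedule and over budget
```

Dividing rather than subtracting (SPI, CPI) makes projects of different sizes comparable at a glance.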

Past Participant Profiles: Countries, Work Experience, Industries

Certificate

Upon successful completion of the diploma, participants will receive a verified digital diploma from Emeritus Institute of Management, in collaboration with Columbia Engineering Executive Education.

Get Certified

Pre-admission Requirements

Applicants must be at least 21 years of age and will be required to submit:

A completed application form

A Bachelor's degree certificate or official transcript in any discipline (the minimum educational requirement)

An updated CV/resume


All candidates who have received their bachelor's or other degree or diploma from an educational institution where English is NOT the primary language of instruction are required to demonstrate English language proficiency through ANY ONE of the following methods:

Obtain a TOEFL minimum score of 550 for the paper based test or its equivalent

Obtain an IELTS minimum score of 6.0

Obtain a Pearson Versant Test minimum score of 59

Obtain a Certificate of Completion for a Certificate course offered by Emeritus

Submit a document which shows that the candidate has, for the last 24 months or more, worked in ANY ONE of these countries: Antigua and Barbuda, Australia, The Bahamas, Barbados, Belize, Canada, Dominica, Grenada, Guyana, India, Ireland, Jamaica, New Zealand, Singapore, South Africa, St Kitts and Nevis, St Lucia, St Vincent and the Grenadines, Trinidad and Tobago, United Kingdom, United States of America

© Emeritus Institute of Management

Special Group Enrollment Pricing

Special pricing of up to a 20% discount is available if you enroll with your colleagues.

Pay in 2 installments

The first installment of $1,500 is due immediately.

The second installment of $1,500 is to be paid by XX XX, XXXX.

Pay in 3 installments

The first installment of $1,020 is due immediately.

The second installment of $990 is to be paid by XX XX, XXXX.

The third installment of $990 is to be paid by XX XX, XXXX.


Prompt Engineering: Key Concepts & Use Cases

Prompt engineering is an essential element in the development, training, and usage of large language models (LLMs) and involves the skillful design of input prompts to improve the performance and accuracy of the model.

In this post, we’ll look at why prompt engineering has been so popular recently, and why it will likely become even more necessary as LLM-enabled apps grow.

What is Prompt Engineering?

Prompt engineering is the practice of developing and refining the input to generative AI models such as ChatGPT, GPT-3, DALL-E, Stable Diffusion, Midjourney, and others. The ultimate purpose of prompt engineering is to improve the performance of the language model by providing well-structured, concise, and tailored input that is relevant to the job or application for which the model is designed.

Prompt engineering frequently involves the careful selection of words and phrases included in the prompt, as well as the overall structure and organization of the input, to achieve this purpose. This systematic approach to prompt engineering is essential because even tiny modifications to the prompt can have a major influence on the outcome.

Effective prompt engineering requires an in-depth understanding of the capabilities and limits of large language models (LLMs), as well as the ability to build engaging input prompts. Furthermore, prompt engineering often involves providing context to the LLM in order for it to generate coherent responses, such as by leveraging external documents or proprietary data or framing the input in a way that helps the model understand the context.

In summary, prompt engineering is an important component of dealing with LLMs, and it requires in-depth knowledge of the underlying technology, a sharp eye for detail, and a talent for creating high-quality input prompts.

Prompt Engineering: Key Terms

LLMs are a type of artificial intelligence that has been trained on a huge amount of text data to create human-like replies to natural language inputs.

LLMs are distinguished by their capacity to produce high-quality, cohesive writing that is frequently indistinguishable from that of a human. This cutting-edge performance is attained by training the LLM on a large corpus of text, often several billion words, allowing it to grasp the intricacies of human language.

Below are several key terms related to prompt engineering and LLMs, starting with the main algorithms used in LLMs:

Word embedding is a foundational technique in LLMs: it represents the meaning of words numerically so that the AI model can process them.

Attention mechanisms are LLM algorithms that allow the AI to focus on certain elements of the input text, such as sentiment-related phrases, while creating an output.

Transformers are a common type of neural network architecture in LLM research that processes input data through self-attention mechanisms.

Fine-tuning is the process of adapting an LLM for a given job or topic by training it on a smaller, relevant dataset.

Prompt engineering is the expert design of input prompts for LLMs to provide high-quality, coherent outputs.

Interpretability is the ability to understand and explain the outputs and decisions of an AI system, which is often a challenge and ongoing area of research for LLMs due to their complexity.
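To make the word-embedding term above concrete, here is a toy example: each word maps to a vector, and cosine similarity measures closeness of meaning. The 3-D vectors are invented for illustration; real LLM embeddings have hundreds or thousands of dimensions:

```python
# Toy word-embedding illustration. The vectors are made up for this
# example; real embeddings are learned during model training.
import math

embeddings = {
    "king":  [0.8, 0.6, 0.1],
    "queen": [0.7, 0.7, 0.1],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine similarity: 1.0 for identical directions, near 0 for unrelated.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine(embeddings["king"], embeddings["apple"]))  # much lower
```

Similar words end up near each other in this vector space, which is what lets the model treat "king" and "queen" as related while "apple" stays distant.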

Elements of Prompts

Instructions: The major purpose of the prompt is to offer clear instructions for the language model.

Context: Context gives extra information to assist the LM in producing more relevant output. This information can come from external sources or be given by the user.

Input data: Input data is the user’s inquiry or request for which we desire an answer.

Output indicator: This specifies the format of the answer.
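The four elements above can be assembled mechanically into a single prompt. A small sketch, where the wording of every element is hypothetical:

```python
# Sketch: combining the four prompt elements (instructions, context,
# input data, output indicator) into one prompt string. All wording
# here is hypothetical.

def build_prompt(instructions, context, input_data, output_indicator):
    return "\n\n".join([
        instructions,
        f"Context: {context}",
        f"Input: {input_data}",
        f"Output format: {output_indicator}",
    ])

prompt = build_prompt(
    instructions="Classify the sentiment of the customer review.",
    context="Reviews come from an online electronics store.",
    input_data="The battery died after two days. Very disappointed.",
    output_indicator="One word: positive, negative, or neutral.",
)
print(prompt)
```

Keeping the elements separate like this makes it easy to vary one (say, the output indicator) while holding the rest fixed when testing a prompt.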

Prompt Engineering: Examples

Let’s take a look at some effective prompt engineering examples from the Awesome ChatGPT Prompts GitHub repository.

Python Interpreter

I want you to act like a Python interpreter. I will give you Python code, and you will execute it. Do not provide any explanations. Do not respond with anything except the output of the code. The first code is: “print(‘hello world!’)”

Prompt Generator

I want you to act as a prompt generator. Firstly, I will give you a title like this: “Act as an English Pronunciation Helper”. Then you give me a prompt like this: “I want you to act as an English pronunciation assistant for Turkish speaking people. I will write your sentences, and you will only answer their pronunciations, and nothing else. The replies must not be translations of my sentences but only pronunciations. Pronunciations should use Turkish Latin letters for phonetics. Do not write explanations on replies. My first sentence is “how the weather is in Istanbul? “.” (You should adapt the sample prompt according to the title I gave. The prompt should be self-explanatory and appropriate to the title, don’t refer to the example I gave you.). My first title is “Act as a Code Review Helper” (Give me prompt only)

We can also find a number of prompt templates in the OpenAI Playground.

Prompt Engineering: Roles

As you can see in these examples, each request includes a “role,” which is an important aspect of directing the chatbot, as introduced with the ChatGPT API release. There are three roles that can be set:

System: The “system” message controls the assistant’s general behavior. For example: “You are ChatGPT, a large language model trained by OpenAI. Answer in as few words as possible. Knowledge cutoff: {knowledge_cutoff} Current date: {current_date}”

User: These messages give specific instructions to the assistant. They will mostly come from application end users, but they can also be hard-coded by developers for certain use cases.

Assistant: The assistant message saves past ChatGPT answers or may be supplied by developers to offer examples of desired behavior.

Here’s an example of what a ChatGPT API request looks like:

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
        {"role": "user", "content": "Where was it played?"}
    ]
)

Prompt Engineering: Parameters

Aside from carefully constructing the written part of a prompt, there are several prompt engineering parameters to consider when working with LLMs. For example, let’s look at the API parameters available for GPT-3 Completions in the OpenAI Playground:

Model: The model to be used for the text completion i.e., text-davinci-003

Temperature: Lower temperatures produce more predictable and repetitive responses; higher temperatures produce more varied ones.

Maximum length: The maximum number of tokens to generate. This varies by model, but ChatGPT allows about 4,000 tokens (approx. 3,000 words) shared between the prompt and completion (1 token ≈ 4 characters).

Stop sequences: Up to four sequences at which the API stops generating further tokens.

Top P: Refers to the probability distribution of the most likely choices for a given decision or prediction i.e., 0.5 means half of all likelihood weighted options are considered.

Frequency penalty: Used to prevent the model from repeating the same words or phrases too often. The frequency penalty is particularly useful for generating long-form text when you want to avoid repetition.

Presence penalty: This increases the chance that the model will discuss new subjects, i.e., how much to penalize new tokens depending on whether they have previously been in the text.

Best of: This is used on the server to produce numerous completions and only show the best results. Streaming completions are only available when set to 1.
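As a rough illustration of how these parameters fit together, here is what a Completions request might look like assembled as a plain dictionary. The values are illustrative, not recommendations, and no API call is made here:

```python
# Sketch: the GPT-3 Completions parameters discussed above, gathered into
# the keyword arguments one would pass to the API. Values are illustrative.

request = {
    "model": "text-davinci-003",
    "prompt": "Summarize the following article in three sentences:\n...",
    "temperature": 0.2,        # low = more predictable output
    "max_tokens": 256,         # budget for the completion
    "stop": ["\n\n"],          # up to four stop sequences
    "top_p": 0.5,              # consider only the top half of probability mass
    "frequency_penalty": 0.5,  # discourage repeated words
    "presence_penalty": 0.3,   # nudge toward new topics
    "best_of": 1,              # must be 1 if streaming completions
}
# With the openai library, this would be passed as
# openai.Completion.create(**request).
print(sorted(request))
```

In practice, temperature and top_p are usually tuned one at a time rather than together, since both control sampling randomness.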

To summarize, each use case of prompt engineering will have its own set of optimal parameters to get the desired outcomes, so it’s worth learning about and experimenting with different parameter settings to optimize performance.

You can also read ChatGPT Plugins: Concepts and Use Cases.

Prompt Engineering: Use Cases

Now that we’ve covered the fundamentals, here are some of the most typical prompt engineering tasks:

Text summarization: It can be used to extract essential points from an article or document.

Answering questions: This is useful when interacting with external documents or databases.

Text Classification: Helpful for applications such as sentiment analysis, entity extraction, and so on.

Role-playing: Involves generating text that simulates a conversation for specific use cases and character types (tutors, therapists, analysts, etc.)

Code generation: The most notable of which is GitHub Copilot

Reasoning: Good for generating text that demonstrates logical or problem-solving abilities, such as decision-making.
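As a small illustration of the text-classification use case, a sentiment-analysis app might assemble its ChatGPT API messages like this. The prompt wording is hypothetical, and nothing is sent to the API here:

```python
# Sketch: building the messages list for a sentiment-classification call.
# Only the message structure is shown; no request is made.

def sentiment_messages(text):
    return [
        {"role": "system",
         "content": "You are a sentiment classifier. Reply with exactly one "
                    "word: positive, negative, or neutral."},
        {"role": "user", "content": text},
    ]

msgs = sentiment_messages("The crew finished the pour ahead of schedule!")
print(msgs)
```

Pinning the output format in the system message ("exactly one word") is what makes the model's reply easy for the application to parse.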

Mr. Pavan’s Data Engineering Journey Drives Business Success


We had an amazing opportunity to learn from Mr. Pavan. He is an experienced data engineer with a passion for problem-solving and a drive for continuous growth. Throughout the conversation, Mr. Pavan shares his journey, inspirations, challenges, and accomplishments, providing valuable insights into the field of data engineering.

As we explore Mr. Pavan’s achievements, we discover his pride in developing reusable components, creating streamlined data pipelines, and winning a global hackathon. His passion for helping clients grow their businesses through data engineering shines through as he shares the impact of his work on their success. So, let’s delve into the world of data engineering and learn from the experiences and wisdom of Mr. Pavan.

Let’s Get Started with the Interview! AV: Please introduce yourself and shed some light on your background.

Mr. Pavan: I started my academic journey as an Information Technology undergraduate, drawn primarily by the promising job opportunities in the field. However, my entire perspective on programming shifted while participating in an MS hackathon called Yappon! I discovered a profound passion for it. This experience became a turning point in my life, igniting a spark to explore the programming world further.

Since then, I have actively participated in four hackathons, with the exhilarating result of winning three. These experiences have sharpened my technical skills and instilled a relentless desire to automate tasks and find efficient solutions. I thrive on the challenge of streamlining processes and eliminating repetitive tasks through automation.

On a personal level, I consider myself an ambivert, finding a balance between introversion and extroversion. However, I am constantly pushing myself to step out of my comfort zone and embrace new opportunities for growth and development. One of my passions outside of programming is trekking. There is something incredibly captivating about exploring the great outdoors and immersing myself in the beauty of nature.

My journey as a computer science enthusiast began with a pragmatic outlook on job prospects. Still, it transformed into an unwavering passion for programming through my participation in hackathons. With a track record of successful projects and a knack for automation, I am eager to continue expanding my skills and making a positive impact in the field of computer science.

AV: Can you name a few people who have influenced your career, and how have they inspired you?

Mr. Pavan: First, I am grateful to my mother and grandmother. They instilled in me the values encapsulated in the Sanskrit quote, ‘Shatkarma Manushya yatnanam, saptakam daiva chintanam.’ Their belief in the importance of human effort and divine contemplation deeply resonated with me. This philosophy emphasizes the balance between personal endeavor and spiritual reflection and has been a guiding principle throughout my career. Their unwavering support and belief in me have been a constant source of inspiration.

In addition, I am fortunate to have a supportive network of friends. They have played an integral role in my career journey. These friends have helped me understand complex programming concepts and motivated me to participate in hackathons and hone my skills. Their guidance and encouragement have been instrumental in pushing me beyond my limits and extracting the best out of me. I am immensely grateful for their presence in my life and for being an integral part of my progress thus far.

AV: What drew you to work with data? What do you find most exciting about your role as a data engineer?

Mr. Pavan: What drew me to work with data was realizing that data drives everything in today’s world. Data is the foundation upon which decisions are made, strategies are formulated, and innovations are born. I was captivated by the immense power that data holds in shaping the success of any industry or organization. The ability to transform raw data into meaningful insights and leverage those insights to drive positive outcomes for customers and businesses became a driving force behind my passion for working with data.

As a data engineer, what excites me the most is the opportunity to be at the forefront of the data revolution. I am fascinated by the intricate process of designing and implementing data systems that efficiently capture, process, and analyze massive volumes of information. Data’s sheer magnitude and complexity present exhilarating challenges that require creative problem-solving and continuous learning.

Must Have Skills for Data Engineers AV: What are some of the most important technical skills a data engineer should possess? How have you developed these skills over time?

Mr. Pavan: Regarding technical skills, several key proficiencies are essential for a data engineer. Firstly, a strong foundation in SQL is vital, as it is the backbone of data manipulation and querying. Writing efficient and optimized SQL queries is crucial in extracting, transforming, and loading data from various sources.

Proficiency in at least one object-oriented programming language, such as Python, Scala, or Java, is also highly valuable for a data engineer. These languages enable the development of data pipelines, data integration workflows, and the implementation of data processing algorithms. Being adept in programming allows for more flexibility and control in working with large datasets and performing complex transformations.

A solid understanding of data warehousing concepts is important as well. This includes knowledge of data modeling techniques, dimensional modeling, and familiarity with different data warehousing architectures. Data engineering involves designing and building data structures that enable efficient data retrieval and analysis, and a strong grasp of these concepts is essential for success in this field.

Additionally, having a working knowledge of data lake concepts and distributed computing is becoming increasingly important in modern data engineering. Understanding how to store, manage, and process data in a distributed and scalable manner using technologies like Apache Hadoop and Apache Spark is highly beneficial. Distributed computing frameworks like Apache Spark allow for parallel processing of large-scale datasets and enable high-performance data processing and analytics.

In my journey as a data engineer, I have developed these technical skills over time through a combination of academic learning, practical experience, and a continuous drive for improvement. SQL and object-oriented programming languages were integral parts of my academic curriculum.

Problem Solving at its Core! AV: How do you approach problem-solving as a data engineer? What methods have you found most effective?

Mr. Pavan: As a data engineer, problem-solving is at the core of my role. When approaching a problem, I believe that identifying the right problem to solve is crucial. Taking the time to clearly understand the problem statement, its context, and its underlying goals allows me to define the problem accurately and set a clear direction for finding a solution.

I often start by gathering information and conducting research to begin the problem-solving process. I explore relevant documentation, online resources, and community forums to gain insights into existing solutions, best practices, and potential approaches. Learning from the experiences and expertise of others in the field helps me broaden my understanding and consider various perspectives.

Once I have a good grasp of the problem and the available resources, I devise a solution approach. I break down the problem into smaller, manageable tasks or components, which enables me to tackle them more effectively. I prioritize tasks based on their importance, dependencies, and potential impact on the solution.

I maintain a mindset of continuous learning and improvement throughout the problem-solving process. I am open to exploring new technologies, techniques, and methodologies that can enhance my problem-solving capabilities.

Don’t Get Bogged Down by the Challenges AV: What are some of the biggest challenges you face as a data engineer, and how do you overcome them?

Mr. Pavan: As a data engineer, there are several challenges that I have encountered in my role. Here are a few of the biggest challenges and how I have learned to overcome them:

Data Quality and Integrity

Ensuring the quality and integrity of data is crucial for accurate analysis and decision-making. However, working with diverse data sources and integrating data from various systems can lead to inconsistencies, missing values, and other data quality issues. To address this challenge, I employ robust data validation and cleansing techniques. I implement data validation checks, perform data profiling, and leverage data quality tools to identify and resolve anomalies. I also collaborate closely with data stakeholders and domain experts to understand the data and address quality concerns.
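The validation checks described here can be as simple as a per-row rule function. A minimal sketch in plain Python; the column names and rules are hypothetical, and a real pipeline would typically run such checks inside a framework or dedicated data-quality tool:

```python
# Sketch: simple per-row data validation checks. Column names and rules
# are hypothetical examples of the kinds of checks described above.

def validate_row(row, required=("id", "amount"), positive=("amount",)):
    errors = []
    for col in required:                      # completeness checks
        if row.get(col) in (None, ""):
            errors.append(f"missing {col}")
    for col in positive:                      # range/sanity checks
        value = row.get(col)
        if isinstance(value, (int, float)) and value < 0:
            errors.append(f"negative {col}")
    return errors

rows = [{"id": 1, "amount": 120.0}, {"id": None, "amount": -5.0}]
bad = {i: validate_row(r) for i, r in enumerate(rows) if validate_row(r)}
print(bad)   # {1: ['missing id', 'negative amount']}
```

Collecting all errors per row, rather than failing on the first, gives stakeholders a complete picture of a source's quality issues in one pass.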

Scalability and Performance

Evolving Technology Landscape

Collaboration and Communication

Data engineering often involves collaborating with cross-functional teams, including data scientists, analysts, and stakeholders. Effective communication and collaboration can be challenging, particularly when dealing with complex technical concepts. To address this challenge, I focus on building strong relationships with team members, actively listening to their requirements, and effectively conveying technical information clearly and concisely. Regular meetings and documentation can also facilitate collaboration and ensure everyone is aligned.

AV: Having worked as a data engineer for approximately 4 years, what accomplishments are you most proud of, and why?

Mr. Pavan: One of my significant achievements is developing reusable components that can be easily plugged and played using configuration files. This initiative has saved a significant amount of work hours for my team and the organization as a whole. By creating these reusable components, we can now quickly and efficiently implement common data engineering tasks, reducing repetitive work and increasing productivity.

I take pride in developing a data pipeline/framework that has streamlined the process of onboarding new data sources. This framework allows us to integrate new data sources into our existing data infrastructure seamlessly. It has reduced the time required for data source onboarding and ensured data accuracy and consistency throughout the pipeline. The ability to deploy this framework rapidly has been instrumental in accelerating data-driven insights and decision-making within the organization.

Participating in and winning a global hackathon has been a significant achievement in my career. It demonstrated my ability to work under pressure, think creatively, and collaborate effectively with team members. Winning the hackathon showcased my problem-solving skills, technical expertise, and ability to deliver innovative solutions within a constrained timeframe. It validated my capabilities and recognized my hard work and dedication to the project.

I am proud of the contributions I have made to help customers grow their businesses. In addition, I am proud of helping clients harness the power of data to drive their decision-making processes by focusing on delivering scalable, reliable, reusable, and performance/cost-optimized solutions. By designing and implementing robust data engineering solutions, I have enabled businesses to leverage data effectively, derive actionable insights, and make informed strategic decisions. Witnessing my work’s positive impact on our customers’ success is incredibly rewarding and fuels my passion for data engineering.

Industry Trends

I seek online courses and training programs from reputable platforms like Coursera, edX, and Udacity. These courses cover many topics, including data engineering, cloud computing, distributed systems, and machine learning. By enrolling in these courses, I can learn from experienced instructors, gain hands-on experience with new tools and frameworks, and stay updated on the latest industry practices.

I actively engage in helping aspiring data engineers through an online learning platform. This involvement allows me to interact with individuals seeking to enter the data engineering field. By answering their questions, providing guidance, and sharing my knowledge, I contribute to their learning journey and gain insights into their challenges and concerns. This experience enables me to understand different perspectives, learn about new technologies or approaches they are exploring, and continuously expand my knowledge base.

I actively sought out learning opportunities both within and outside my workplace. This involved attending workshops, webinars, and conferences to stay updated on industry trends and technologies. I also enrolled in online courses to enhance my knowledge and skills in specific areas of interest.

I actively sought projects that stretched my abilities and allowed me to gain new experiences. I expanded my skill set by volunteering for challenging assignments, and in doing so demonstrated my willingness to take the initiative and go beyond my comfort zone. These projects provided valuable learning opportunities and helped me add significant accomplishments to my resume.

Tips for Freshers Coming into Data Engineering

Having a growth mindset and a willingness to learn continuously is important. Stay curious and seek learning opportunities to expand your knowledge and stay ahead of industry trends. This can include taking online courses, attending webinars, reading industry blogs, and participating in relevant communities or forums.

Familiarize yourself with different data storage systems, data processing frameworks, data integration tools, and cloud computing. This includes technologies like Hadoop, Apache Spark, Apache Kafka, cloud platforms, and database management systems. Understanding the strengths and limitations of each component will help you design robust and efficient data pipelines.

Focus on developing proficiency in languages like Python, Scala, or Java, commonly used in data engineering tasks.

Theory alone is not sufficient in data engineering. Seek opportunities to work on real-world projects or internships where you can apply your knowledge and gain practical experience.

Engage with the data engineering community, join relevant forums or groups, and connect with professionals in the field.


From his initial foray into programming during a hackathon to his successful participation in multiple competitions, Mr. Pavan’s story is one of transformation and unwavering dedication. We hope his dedication, technical skills, and commitment to continuous learning inspire aspiring data professionals.

For those seeking additional career guidance, we recommend reaching out to him on LinkedIn as a means to establish a professional connection. Connecting with him on this platform can provide valuable insights and assistance in navigating your career path effectively.


Understand ACID and BASE in Modern Data Engineering

This article was published as a part of the Data Science Blogathon.


Dear data engineers, this article covers a very interesting topic. Let me give some flashback: a few years ago, in a discussion, Mr. Someone coined a new question about the ACID and BASE properties of data. A sudden silence dropped over the room. Everyone started staring at each other's faces; a few of them started saying that H2SO4, HCl, HNO3, and H2CO3 are acids, while KOH and NaOH are bases.

The person who threw out the word stood up and said: Guys, kindly listen to me. I know you were all A+ students in engineering chemistry, chemical engineering, or whatever chemistry you learned in school and college, but I am talking about data engineering. What I mentioned are the key properties of a transaction, specifically from an operational perspective. Yes! They are essential for OLTP and OLAP in the current digital transformation, and they apply across industries for implementing the best operational systems and building modern data warehouses. He then articulated all the ingredients in detail, as follows. Let's focus on them.

What is a Modern Database (DB)?

We know that databases are well-structured, organized collections of data stored on DB servers. The main focus is to store, manage, and process that data for analytics, to derive the necessary insights from it, build various business solutions, and use them to enhance business opportunities. So-called modern database systems are managed primarily in the cloud and are designed to run across multiple cloud environments such as Azure, AWS, and GCP.

Why are ACID and BASE Important in This Modern Database World?

No worries about where ACID and BASE fit in this context: both are guiding stars leading organizations toward a successful database management approach.

All good, but what is the problem with the existing DB management approach, and where do these come onto the stage now? There are several reasons, guys! In the current data world, one of the major challenges is that we generate massive amounts of data that must be processed in seconds, minutes, hours, or on a daily basis; I hope you all agree with me. So we started calling this data BIG DATA. What is its scope? I certainly can't say it in one word or one line, because there is much more to it.

What Are the Benefits of ACID and BASE?

To get the most benefit, we first have to uphold the capabilities and standards of the data during each action on it, whether inserting, updating, selecting, analyzing, or implementing data products with golden datasets. So the best technique in the data and data warehouse domain for steering through the convolutions of data management across various database sources is a clear set of transaction guarantees.

To achieve this, ACID and BASE provide sets of guiding standards used to guarantee that database transactions are processed consistently.

My quick take on these standards: whenever changes are made within a database, they need to be performed while nursing the data and ensuring it doesn't become tainted. Applying the ACID properties to each transaction, that is, to each modification of a row in a table or database, is the best way to maintain the truth and consistency of a database. The benefits include:

Data integrity

Simplified operational functions

Reliable and durable storage

What is ACID?

ACID is an acronym for four major transaction properties: Atomicity, Consistency, Isolation, and Durability.

If your database operations have all these ACID properties, we can call the operation an ACID transaction, and a data store that applies this set of operations is called an ACID transaction system.

This guarantees data integrity regardless of system or power failures, errors, or other issues affecting data and its transaction activities, such as creating new records or updating existing rows.

In simple terms, ACID provides the guiding principles that safeguard database transactions so that they are processed consistently.

Let’s focus on each property in detail now.

Atomicity: in just two words, "completed" or "not at all" with respect to the transaction; further simplified, "done" or "don't disturb." Still confused? I can understand. During a database transaction, we have to ensure that the commit statement finishes the entire operation successfully. If anything interrupts mid-operation, such as DB connection issues, internet outages, power outages, missing data constraints, or data quality problems, the database must roll back to its prior safe state and keep only the data for which a commit was completed.

By using atomicity, we can ensure that either the entire transaction was completed or that nothing of it was done.
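To make atomicity concrete, here is a minimal sketch using Python's built-in sqlite3 module (the table and values are invented for illustration): a simulated failure mid-transfer leaves the database in its prior state.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE name = 'alice'")
        # simulate a mid-transaction failure before bob is credited
        raise RuntimeError("connection lost")
except RuntimeError:
    pass

# neither half of the transfer is visible: the database rolled back
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}
```

Because the debit and the simulated failure happened inside one transaction, the rollback restores both rows to their committed state.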

Consistency: as with anything, the expectation is consistency, and databases are no different; it means maintaining data integrity constraints throughout the transaction's journey to sustain quality and performance. If a transaction would violate those constraints, it is abandoned and its changes are rolled back to their previous state to retain consistency.


Isolation: each transaction is performed in serializable mode and in a distinct order without impacting any other transactions happening in parallel. In other words, multiple transactions can occur in parallel with no possibility of impacting one another. Isolation can be implemented anywhere between two ends of a spectrum: optimistic and pessimistic transaction scopes.

• An optimistic transaction assumes conflicts are rare: it proceeds without locks and is terminated if it detects that another transaction has read or written the same data in the meantime.

• A pessimistic transaction assumes conflicts are likely: it locks the data it touches up front, so other transactions must wait, trading throughput for safety.
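As a hedged illustration of the optimistic end of that spectrum, here is a toy version-check scheme in Python (the in-memory store and names are invented for the example): a writer commits only if the row's version is unchanged since it read it.

```python
class ConflictError(Exception):
    pass

store = {"row1": {"value": 10, "version": 1}}

def read(key):
    row = store[key]
    return row["value"], row["version"]

def commit(key, new_value, expected_version):
    row = store[key]
    if row["version"] != expected_version:   # someone else wrote in between
        raise ConflictError("stale read, transaction aborted")
    row["value"] = new_value
    row["version"] += 1

value, version = read("row1")
commit("row1", value + 5, version)           # succeeds: version still matches
print(store["row1"])                          # {'value': 15, 'version': 2}

value, version = read("row1")
store["row1"]["version"] += 1                 # simulate a concurrent writer
try:
    commit("row1", value + 5, version)
except ConflictError as e:
    print(e)                                  # stale read, transaction aborted
```

The first commit succeeds; the second is aborted because the version check detects the interleaved write, which is exactly how duplicate or conflicting writes are terminated.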


Durability: as we know, durability ensures stability and sustainability; in the same fashion, even after a system failure, changes that were successfully committed to the database survive permanently, and the data is not corrupted at any cost.

How are ACID Transactions Implemented?


Identify the location of the record that needs to be updated on the table/DB server.

Prepare buffer memory for transferring the disk block into memory.

Apply your updates in that memory space.

Push the modified block back out to disk.

Hold the lock until the transaction completes or fails.

Make sure the transactions are recorded in the transaction log table/files.

Because changes are first saved in a separate repository (the log) before being applied to the actual database, the implementation qualifies as ACID.

If the system fails mid-transaction, the transaction should either roll back or continue from where the transaction log left off.

All done in the best way! ACID is in place.
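The steps above can be sketched as a toy undo-log in Python. This is only an illustration of the flow (the "disk" and record names are invented); real engines use page buffers and write-ahead logs.

```python
disk = {"row42": {"price": 100}}      # 1. locate the record on "disk"
txn_log = []                          # the transaction log

def rollback():
    # replay the log in reverse to restore the prior state
    for key, field, old in reversed(txn_log):
        disk[key][field] = old
    txn_log.clear()

def update(key, field, new_value, fail=False):
    buffer = dict(disk[key])          # 2-3. copy the block into memory and edit it
    txn_log.append((key, field, disk[key][field]))   # log the old value first
    buffer[field] = new_value
    if fail:                          # simulated crash mid-transaction
        rollback()
        return
    disk[key] = buffer                # 4. push the modified block back to disk
    txn_log.clear()                   # 5. transaction complete, release

update("row42", "price", 120)
print(disk["row42"])                  # {'price': 120}
update("row42", "price", 999, fail=True)
print(disk["row42"])                  # {'price': 120}  -- rolled back
```

The failed update never reaches "disk" because the log entry is written before the change and is replayed on failure.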

He highlighted how, in chemistry, a base is the opposite of an acid; in database concepts we have a similar relationship. The BASE model provides a number of benefits over ACID: it focuses primarily on the availability of database systems, and it relates to ACID only indirectly.

We can derive the words behind BASE:

• Basically Available – availability is the key factor in the current digital world. In the BASE context, databases guarantee the availability of required data by replicating it across different geographies rather than enforcing immediate consistency. In cloud (e.g., Azure) technology this is a mandatory action item when implementing any data component, and it comes with a simple yet powerful configuration process.

• Soft State – the state of the system may change over time even without new input, because replicas are still converging; the database does not enforce that every write is immediately consistent everywhere.

• Eventually Consistent – in the BASE context, consistency is not enforced immediately; instead, the system guarantees that, given enough time without new writes, all replicas converge to the latest refreshed data. This keeps the database simple.

In this modern database engineering culture, there are many more options that imply BASE than options that are strictly ACID. The prime examples are NoSQL databases, which tend to lean toward BASE principles; my favorites are MongoDB, Cosmos DB, and Cassandra. Some NoSQL databases also partially apply ACID rules where required for functional facets. These can be useful for data warehouses and data lakes in the staging layer.
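A minimal Python sketch of the BASE behaviour described above, assuming a toy two-replica store with an asynchronous replication queue: a write is accepted on one node immediately, reads may briefly be stale, and the replicas converge once replication runs.

```python
replicas = [{"user": "alice"}, {"user": "alice"}]
pending = []                       # replication queue

def write(key, value):
    replicas[0][key] = value       # basically available: accept on one node
    pending.append((key, value))   # replicate asynchronously

def replicate():
    while pending:
        key, value = pending.pop(0)
        replicas[1][key] = value   # eventually consistent

write("user", "bob")
print(replicas[1]["user"])         # alice -- stale read before replication
replicate()
print(replicas[1]["user"])         # bob   -- replicas have converged
```

The window between the write and the replication step is exactly the "soft state" period in which different replicas may answer differently.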

Mr. Someone completed his big journey through ACID and BASE. Finally, the folks in the meeting room asked whether databases also have pH values and any specific factors to improve and neutralize them. He replied: Yes! We will discuss this in the next meeting, and with that he closed the meeting.


Guys! I hope you understood, and I believe these are the takeaways from this article:

What a modern database (DB) is and its features

What ACID and BASE are, and why both matter for surviving in this modern database world

The advantages of implementing ACID in a database

A detailed study of ACID and how to implement it in simple steps

How BASE is more flexible than ACID, and which BASE-leaning databases are available in the market

The pitfalls of ACID transactions

Since ACID relies on a locking mechanism, ACID transactions tend to be sluggish for read and write operations, so high-volume applications might take a performance hit.

So the choice is yours: strong consistency but slower performance with ACID-compliant DBs, or availability and speed with non-ACID-compliant ones.

Remember, data consistency, data quality, and availability are the major considerations for decision-making and prediction.

Thanks a lot for your time, and I will get back with another interesting topic shortly! Till then, bye! – Shantha

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion


Data Engineering For Beginners – Partitioning Vs Bucketing In Apache Hive


Understand the meaning of partitioning and bucketing in Hive in detail.

We will see how to create partitions and buckets in Hive.


You might have seen an encyclopedia in your school or college library. It is a set of books that will give you information about almost anything. Do you know what is the best thing about the encyclopedia?

Yes, you guessed it correctly. The words are arranged alphabetically. For example, you have a word in mind: "Pyramids". You will directly go and pick up the book with the title "P". You don't have to search for it in the other books. Can you imagine how tough the task of finding a single word would be if the books were stored without any order?

Here, storing the words alphabetically represents indexing, while using a different location for the words that start with the same character is akin to bucketing.

Similar storage techniques, partitioning and bucketing, exist in Apache Hive so that we can get faster results for search queries. In this article, we will see what partitioning and bucketing are, and when to use each.

Table of Contents

What is Partitioning?

When to use Partitioning?

What is Bucketing?

When to use Bucketing?

What is Partitioning?

Apache Hive allows us to organize the table into multiple partitions where we can group the same kind of data together. It is used for distributing the load horizontally. Let’s understand it with an example:

Suppose we have to create a table in Hive that contains the product details for a fashion e-commerce company. It has the following columns:

Now, the first filter most customers use is gender; then they select the category (e.g., Shirt), its size, and color. Let's see how to create the partitions for this example.

CREATE TABLE products ( product_id string, brand string, size string, discount float, price float ) PARTITIONED BY (gender string, category string, color string);

Now, the hive will store the data in the directory structure like:
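As an illustration (plain Python rather than Hive itself, with made-up example values), the key=value directory layout Hive creates for the PARTITIONED BY columns above can be computed like this:

```python
def partition_path(table, **partition_values):
    # Hive lays partitions out as table/key1=value1/key2=value2/...
    parts = [f"{k}={v}" for k, v in partition_values.items()]
    return "/".join([table, *parts])

path = partition_path("products", gender="Male", category="Shirt", color="Blue")
print(path)  # products/gender=Male/category=Shirt/color=Blue
```

Every distinct (gender, category, color) combination gets its own directory, which is why queries filtering on those columns only scan the matching subtree.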


Partitioning the data gives us performance benefits and also helps us organize the data. Now, let's see when to use partitioning in Hive.

When to use Partitioning?

When a frequently searched column has low cardinality. For example, if you create a partition by country name, at most around 195 partitions will be made, and that number of directories is manageable for Hive.

On the other hand, do not create partitions on columns with very high cardinality, for example product IDs, timestamps, or prices, because this will create millions of directories that will be impossible for Hive to manage.

It is effective when the data volume in each partition is not very high. For example, suppose you have airline data partitioned by country and want to calculate the total number of flights in a day. The result will take more time to compute for the "UAE" partition, which is home to one of the busiest airports in the world (Dubai), whereas a country like "Albania" will return results more quickly.

What is Bucketing?

In the above example, we know that we cannot create a partition on the price column because its data type is float and an infinite number of unique prices is possible.

Hive would have to generate a separate directory for each unique price, and it would be very difficult for Hive to manage them all. Instead, we can manually define the number of buckets we want for such columns.

In bucketing, the partitions can be subdivided into buckets based on the hash function of a column. It gives extra structure to the data which can be used for more efficient queries.

CREATE TABLE products ( product_id string, brand string, size string, discount float, price float ) PARTITIONED BY (gender string, category string, color string) CLUSTERED BY (price) INTO 50 BUCKETS;

Now, only 50 buckets will be created no matter how many unique values are in the price column: Hive hashes the price and takes the hash modulo 50 to decide which bucket each row lands in, so rows with the same price always end up in the same bucket.
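A small Python illustration of why bucketing bounds the file count (Hive uses its own hash function internally; the md5-based one here is only a stand-in for determinism):

```python
import hashlib

NUM_BUCKETS = 50

def bucket_for(price):
    # hash the clustered column's value and fold it into a fixed bucket range
    digest = hashlib.md5(str(price).encode()).hexdigest()
    return int(digest, 16) % NUM_BUCKETS

prices = [19.99, 499.0, 1250.5, 19.99]
buckets = [bucket_for(p) for p in prices]
print(buckets)
```

However many distinct prices appear, every row maps into one of exactly 50 buckets, and equal prices always share a bucket.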

When to use Bucketing?

We cannot do partitioning on a column with very high cardinality, so we bucket it instead. Too many partitions result in too many Hadoop files and directories, which increases the load on the name node because it has to carry the metadata of each partition.

If your queries involve map-side joins, then bucketed tables are a good option. A map-side join is a process where two tables are joined using only the map function, without any reduce function. I would recommend you go through this article for a deeper understanding of map-side joins: Map Side Joins in Hive

End Notes

In this article, we have seen what partitioning and bucketing are, how to create them, and their pros and cons.

I would highly recommend you go through the following resources to learn more about Apache Hive:


Process Memory Management In Linux

Process memory management is a crucial aspect of any operating system. In Linux, the memory management system is designed to efficiently manage memory usage, allowing processes to access and use the memory they require while preventing them from accessing memory they do not own. In this article, we will discuss process memory management in Linux in detail, covering various aspects such as memory allocation, virtual memory, memory mapping, and more.

Memory Allocation

Memory allocation is the process of assigning memory to a process or program. In Linux, there are two main approaches to memory allocation: static and dynamic.

Static Memory Allocation

Static memory allocation is done at compile time: the memory allocated for a program is fixed and cannot be changed during runtime. The memory is allocated in the program's data section or stack segment. The data section contains global and static variables, while the stack segment contains local variables.

Dynamic Memory Allocation

Dynamic memory allocation is done during runtime, where a program's memory can be adjusted based on its requirements. The C library provides functions such as malloc(), calloc(), and realloc() to dynamically allocate memory; under the hood, they obtain memory from the kernel through system calls such as brk() and mmap(). These functions allocate memory from the heap segment of the program's address space.

Virtual Memory

Virtual memory is a memory management technique that allows a program to use more memory than is physically available in the system. In Linux, virtual memory is implemented using a combination of hardware and software. The hardware component is the Memory Management Unit (MMU), which is responsible for translating virtual memory addresses to physical memory addresses. The software component is the kernel's virtual memory manager, which manages the allocation and deallocation of virtual memory.

Memory Mapping

Memory mapping is a technique that allows a process to access a file's contents as if they were part of the process's memory. In Linux, memory mapping is implemented using the mmap() system call, which maps a file into a process's virtual address space, allowing the process to read and write the file's contents as if they were part of its own memory. Memory mapping is commonly used in applications such as databases and multimedia players, where large files need to be accessed efficiently.
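A minimal sketch of memory mapping using Python's stdlib mmap module (which wraps the mmap() system call) over a throwaway temporary file: reads and writes go through the mapping rather than read()/write().

```python
import mmap
import os
import tempfile

# create a small file to map
fd, path = tempfile.mkstemp()
os.write(fd, b"hello memory mapping")
os.close(fd)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mapped:   # map the whole file
        print(bytes(mapped[0:5]))              # b'hello'
        mapped[0:5] = b"HELLO"                 # writes land in the file's pages

with open(path, "rb") as f:
    data = f.read()
print(data)                                    # b'HELLO memory mapping'
os.unlink(path)
```

The in-place write through the mapping is visible when the file is reopened, with no explicit write() call.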

Shared Memory

Shared memory is a technique that allows multiple processes to access the same portion of memory. In Linux, System V shared memory is implemented using the shmget(), shmat(), and shmdt() system calls: shmget() creates a shared memory segment, shmat() attaches the segment to a process's address space, and shmdt() detaches it. Shared memory is commonly used for inter-process communication, where multiple processes need to share data efficiently.
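The shmget()/shmat()/shmdt() flow is a C API, but it can be mirrored with Python's stdlib multiprocessing.shared_memory module (available since Python 3.8), used here as a rough analogue rather than the System V calls themselves:

```python
from multiprocessing import shared_memory

# "shmget": create a named shared memory segment of 16 bytes
shm = shared_memory.SharedMemory(create=True, size=16)
shm.buf[:5] = b"hello"

# "shmat": a second handle attaches to the same segment by name
attached = shared_memory.SharedMemory(name=shm.name)
received = bytes(attached.buf[:5])
print(received)          # b'hello'

# "shmdt" and removal: detach both handles and unlink the segment
attached.close()
shm.close()
shm.unlink()
```

In a real inter-process setup, the segment name would be passed to another process, which attaches with the same `name=` argument.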


Swapping

Swapping is a technique that allows the kernel to move pages of memory from RAM to swap space on disk when the system's memory is low. In Linux, swapping is implemented using a combination of hardware and software: the disk provides the swap space, and the kernel's swap management code (notably the kswapd daemon) manages the process. When the system's memory is low, the kernel selects pages of memory to swap out to disk, freeing up memory for other processes.

Some additional concepts to consider include −

Kernel Memory Management

The Linux kernel itself also requires memory management, and it uses a separate set of techniques to manage kernel memory, which stores the data structures and code the kernel needs to operate. The kernel uses techniques like memory mapping, page caching, and dedicated memory allocation to manage kernel memory.

Memory Protection

Memory protection is another critical aspect of memory management in Linux. Memory protection techniques prevent processes from accessing memory they are not authorized to access. The MMU implements memory protection using page tables, which map virtual memory addresses to physical memory addresses and track the permissions of each memory page.

Memory Fragmentation

Memory fragmentation occurs when available memory is divided into small, non-contiguous chunks, making it difficult to allocate larger blocks of memory. Fragmentation can lead to performance issues and even crashes if the system runs out of memory. The Linux kernel uses several techniques to manage memory fragmentation, including memory compaction and defragmentation.

Memory Leak Detection

Failing to release dynamically allocated memory results in memory leaks, where memory is never returned to the system and can eventually cause a program to crash due to insufficient memory. Detecting and fixing memory leaks is crucial for maintaining system stability and performance. Linux provides several tools for detecting memory leaks, including valgrind, which can detect leaks and other memory-related issues.
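valgrind targets compiled C/C++ programs; as a rough Python analogue (an illustration, not a substitute for valgrind), the stdlib tracemalloc module can point at the source line where a process is accumulating memory:

```python
import tracemalloc

tracemalloc.start()
before = tracemalloc.take_snapshot()

# simulate roughly 1 MB of retained allocations on one line
leaky = [bytes(1000) for _ in range(1000)]

after = tracemalloc.take_snapshot()
top = after.compare_to(before, "lineno")[0]  # biggest growth, with file and line
print(top.size_diff > 0)                     # True
tracemalloc.stop()
```

Comparing snapshots before and after a workload highlights the allocation sites whose retained size grew, which is the usual first step in hunting a leak.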


Conclusion

In conclusion, process memory management is a crucial aspect of any operating system, and Linux is no exception. The Linux kernel provides a robust and efficient memory management system, allowing processes to access and use the memory they require while preventing them from accessing memory they do not own. In this article, we discussed various aspects of process memory management in Linux, including memory allocation, virtual memory, memory mapping, shared memory, and swapping. Understanding these concepts is essential for any Linux developer or administrator to manage memory usage in their systems efficiently.
