Ioen: Solving The Energy Crisis With Interconnected Devices (updated November 2023 on Cancandonuts.com)
The ongoing popularity of cryptocurrency, and blockchain technology by extension, is well deserved. Blockchain has allowed for innovation in many areas of technology and with the amount of backing it has gained, it shows no signs of slowing down anytime soon. Despite its many benefits though, it comes with quite a few drawbacks that many people tend to overlook.
Most prominent among these is the issue of fraud: decentralization and the absence of regulatory bodies mean that there is very little recourse when funds are illegally taken.
Separate from fraud, however, one issue that consistently flies under the radar is the effect of crypto mining on society. With some blockchains using more energy than entire countries, this is becoming an increasingly significant problem. That is not to say that cryptocurrency is abhorrent or unneeded; on the contrary, blockchain could represent a very bright future, provided the current problems are solved.
Climate change is here and an ever-increasing problem for society. The earth is warming up, sea levels are rising, and extreme weather is impacting our electricity grids. Many new companies have risen over the past decade to combat this increasing threat to life on earth, and with blockchain making new strides, many crypto-related companies are doing what they can to reduce their energy footprint.
Just recently, the world’s second-largest crypto by market cap, Ethereum, announced its commitment to Proof-of-Stake migration, potentially reducing its energy use by 99.95%, a number that could do wonders for energy use.
Unlocking Global Minigrids to Establish a Concerted Electricity Management System
Powered by RedGrid, IOEN (pronounced ION) is a new technology based on the Holochain network. Holochain, already known for scalable and energy-efficient solutions, is a natural fit for such a project. With IOEN, the team is looking at the energy problem not just from the creation angle but also from a distribution perspective. Renewable energy sources like solar are a great innovation and a step in the right direction for energy savings.
However, inefficient distribution on the power grid means that power often doesn’t reach where it is needed.
IOEN is striving to change that. At its most basic level, IOEN’s objective is to create a system where energy is transferred as efficiently as possible, and what better way to transmit energy to devices than with the devices themselves? The goal is to create a local network of devices called a ‘virtual microgrid’ which will direct energy use to where it is needed the most, effectively optimizing energy distribution as well as possible. These grids will then interconnect creating a global network.
The overall goal of IOEN is to back the creation and expansion of mini- and micro-grids across the world by supplying financing, open-source support tools, and the capability to bring agent-centric mini-grids into existence. The IOEN Token utilizes Holochain and blockchain technology to serve local and global grid requirements, funding energy grid stability and facilitating a secure energy value exchange to back it up.
IOEN’s approach brings together exclusive NFTs, the booming world of DeFi, and energy, gamified in a manner like never before. It aims to deliver unique value propositions to energy providers and blockchain players in the cryptoverse.
With a solid roadmap, the project is preparing to work with real-world entities as it gears up for its mainnet launch in early 2023. The team is live on various social media channels and is currently working on an informative whitepaper to explain the dynamics of the project to the community.
IOEN and IOEC Tokens
Using two tokens, IOEN and IOEC, the team plans to put management of the network in the hands of those with a stake in it.
The former, the IOEN token (not to be confused with the IOEN network) will be primarily used to represent a stake in the network. IOEN agents who hold a stake in the network will then be permitted to hold IOEC which functions as network ‘credit’. These credits can be used to pay for associated costs linked with the running of the overall system.
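The stake-and-credit relationship described above can be sketched as a toy model. All names and mechanics below are illustrative assumptions for exposition, not IOEN's actual protocol:

```python
class VirtualMicrogrid:
    """Toy model of the two-token idea: IOEN represents a stake in the
    network; only staked agents may hold IOEC 'credits', which pay the
    running costs of the system. Illustrative only, not IOEN's protocol."""

    def __init__(self):
        self.ioen_stake = {}   # agent -> IOEN staked
        self.ioec_credit = {}  # agent -> IOEC balance

    def stake(self, agent, amount):
        self.ioen_stake[agent] = self.ioen_stake.get(agent, 0) + amount

    def grant_credit(self, agent, amount):
        # Only agents holding an IOEN stake are permitted to hold IOEC.
        if self.ioen_stake.get(agent, 0) <= 0:
            raise ValueError(f"{agent} holds no IOEN stake")
        self.ioec_credit[agent] = self.ioec_credit.get(agent, 0) + amount

    def pay_network_cost(self, agent, cost):
        # IOEC functions as network credit, spent on system running costs.
        if self.ioec_credit.get(agent, 0) < cost:
            raise ValueError("insufficient IOEC credit")
        self.ioec_credit[agent] -= cost


grid = VirtualMicrogrid()
grid.stake("household_a", 100)
grid.grant_credit("household_a", 10)
grid.pay_network_cost("household_a", 4)
print(grid.ioec_credit["household_a"])  # 6
```

The key property the sketch encodes is the gating rule from the text: credits exist only downstream of a stake.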
Save Serious Money With A Business Energy Audit
Sluggish sales and hard-to-get loans may blight the business landscape, but cutting energy waste can bring a big payoff to a small company. To shave liabilities off your profit-and-loss statement, aim to slash your power consumption instead of your workforce or the crucial projects that could help your company expand.
“We all have incentives to manage our utility bill, but many people don’t try because they don’t know how,” says Geoff Overland, who runs IT and data-center programs for Wisconsin’s statewide Focus on Energy program. “By efficiently managing our energy, we have an immediate impact on our bottom line.”
DIY or Hire a Pro?
Although you won’t find a one-size-fits-all approach to an energy audit, you will have to assign someone to be in charge of the project.
If your headquarters is at home or in a similarly small space, free online tools for residential audits will walk you through the process. Utility companies, Energy Star and similar programs, and groups such as the Residential Energy Services Network offer checklists and online calculators.
Once you gather a year’s worth of utility bills, about half an hour with Enercom’s Energy Depot software will break down your consumption and suggest ways to trim it.
Pacific Gas & Electric on the West Coast, Duke Energy in the South, and other utility companies also provide Web-based audit tools for small businesses operating in spaces ranging from an apartment complex to a small warehouse. If your business has multiple locations, dozens of employees, or specialized needs beyond those of the usual small office or shop, seek a professional auditor through your local utility.
Checklists and First Steps
Among the first major steps is examining doorways, windows, and insulation for leaks of cool or hot air. A building shouldn’t “breathe” unless you’ve opened a window for air. Heating and cooling are the biggest building energy hogs, so perform regular tune-ups and updates on heating and air-conditioning systems, and seek Energy Star-rated equipment.
Electronics
Consumer electronics account for one-fifth of home energy use, according to the government, but no hard-and-fast figure exists for small businesses. The more your company relies on hardware, the more energy it tends to use.
To measure exactly how much each gadget and appliance in your workplace costs in watts and dollars, devices such as the Watts Up or the Kill-a-Watt cost around $100, and some are integrated into power strips. Some utility companies lend watt meters to small businesses for several weeks at no charge.
Studies show that, in a home, electronic devices waste up to one-fifth of their energy consumption on standby power, plugged in but not in use–an easy opportunity for savings.
A typical computer can use 500 kilowatt-hours of electricity or more per year, Overland says. If you pay 8 cents per kilowatt-hour, that’s $40 per year to run just one computer. Multiply that by two or three if you don’t turn off the PC at night.
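A quick sanity check of that arithmetic, using the figures from the text (the helper function is ours):

```python
def annual_pc_cost(kwh_per_year=500, rate_per_kwh=0.08):
    """Yearly electricity cost of one PC, using the article's figures:
    500 kWh/year at 8 cents per kWh."""
    return kwh_per_year * rate_per_kwh

print(annual_pc_cost())      # ≈ $40/year for one computer
print(annual_pc_cost() * 3)  # ≈ $120/year if it is never powered off at night
```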
In Windows 7’s Control Panel, for example, choose Power Options and select the Power Saver plan to turn off a display after 5 minutes of inactivity and then put the PC to sleep after 15 minutes (or even 5 minutes). Advanced options provide more controls, such as setting a laptop to sleep when you close its lid.
Put Electronics on an Energy Diet
Among the simplest methods for reducing your devices’ electrical load is keeping them plugged into a surge-protecting power strip that you can flip off at the end of the day. Power strips with occupancy sensors, such as those from Belkin, shut down attached peripherals when you step away from a PC.
To put an existing PC on double duty (or more), Ncomputing offers multiuser tools for PCs that enable one computer to serve more than one staffer simultaneously.
Power-controlling devices can reduce the energy consumed by big, old appliances, such as an aging refrigerator in the office kitchen.
In the Data Center
Businesses that maintain large storage needs or that manage a data center have additional energy concerns. For them, virtualization is a blessing. Overland has seen virtualization save companies $280 and 3500 kilowatt-hours per year per server, thanks to reductions in server, cooling, and UPS power. Plus, a blade server or blade chassis can be a wise replacement for multiple physical servers.
Energy and Money Saved
A business renting 2000 square feet of office space in San Francisco could save $1360 by optimizing its electronic devices’ power settings, purchasing Energy Star equipment, and using fluorescent lighting, according to PG&E’s SmartEnergy Analyzer.
If you work at home, look for consumer tax credits for upgrading windows, doors, and heating and cooling systems, as well as for installing solar, wind, fuel cell, or geothermal energy systems.
Commercial-building owners can enjoy federal tax breaks of $1.80 per square foot if they halve their yearly energy costs. States including California offer additional efficiency incentives and rebates.
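The deduction math is straightforward (rate from the text; the function name is ours). For instance, the 2000-square-foot office from the example above would qualify for about $3,600 if it halved its yearly energy costs:

```python
def federal_deduction(square_feet, rate_per_sqft=1.80):
    """Federal tax deduction for commercial buildings that halve their
    yearly energy costs, at the article's rate of $1.80 per square foot."""
    return square_feet * rate_per_sqft

print(federal_deduction(2000))  # ≈ $3,600
```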
More Resources
Solving Business Case Study Assignments
Introduction
Transport and Logistics
Go-ride – Your two-wheeler taxi, the indigenous Ojek
Go-car – Comfort on wheels. Sit back. Sleep. Snore.
Go-send – Send or get packages delivered within hours.
Go-box – Moving out? We’ll do the weights.
Go-bluebird – Ride exclusive with the Bluebird.
Go-transit – Your commute assistant, with or without Gojek
Food & Shopping
Go-mall – Shop from an online marketplace
Go-mart – Home delivery from nearby stores
Go-med – Buy medicines, vitamins, etc from licensed pharmacies.
Payments
Go-pay – Drop the wallet and go cashless
Go-bills – Pay bills, quickly and simply.
Paylater – Order now pay later.
Go-pulsa – Data or talk time, top-up on the go.
Go-sure – Insure things you value.
Go-give – Donate for what matters, touch lives.
Go-investasi – Invest smart, save better.
Daily needs
GoFitness allows users to access exercises such as yoga, pilates, pound fit, barre, muay thai and Zumba.
Business
Go-biz – A merchant #SuperApp to run and grow business.
News & Entertainment
Go-tix – Book your show, Skip the queue.
Go-play – App for movies and series.
Go-games – Gaming tips, trends, etc.
Go-news – Top news from top aggregators.
The data generated through these services is enormous, and the Gojek team has engineering solutions to tackle day-to-day data engineering issues. The Central Analytics and Science Team (CAST) enables multiple products within the Gojek ecosystem to make efficient use of the abundant data involved in running the app. The team has analysts, data scientists, data engineers, business analysts, and decision scientists working on in-house deep analytics solutions and other ML systems.
The analysts’ role centers on solving day-to-day business problems: applying business knowledge, creating impact, deriving insights, performing root cause analyses (RCAs), and keeping top management informed on both micro and macro metrics and on product decisions that address business problems.
Learning Objectives
RCA on growth drivers and headwinds faced by the organizations.
Using Pandas for EDA, slicing, and dicing.
Marketing budget optimization
Profits as the north-star metric (L0 metric)
Using the PuLP solver to solve LPs.
Writing LP problems using PuLP with clear and crisp formulations.
Linear regression and cross-validation
Simple regression exercise using the steps provided in the questionnaire.
This article was published as a part of the Data Science Blogathon.
Problem Statement Part I
GOJEK directors have asked BI analysts to look at the data to understand what happened during Q1 2023 and what they should do to maximize revenue for Q2 2023.
Given the data in Table A, what are the main problems that we need to focus on?
Given the data in Table B, how will you maximize the profit if we only have a budget of IDR 40,000,000,000?
Present your findings and concrete solutions for a management meeting.
Part II
Problem: Using multiple linear regression, predict total_cbv.
Create 1 model for each service.
Forecast period = 2023-03-30, 2023-03-31, and 2023-04-01
Train period = the rest
List of predictors to use:
Day of month
Month
Day of week
Weekend/weekday flag (weekend = Saturday & Sunday)
Pre-processing (do it in this order):
Remove GO-TIX
Keep only `Cancelled` order_status
Ensure the complete combinations (cartesian product) of date and service are present
Impute missing values with 0
Create is_weekend flag predictor (1 if Saturday/Sunday, 0 if other days)
One-hot encode month and day of week predictors
Standardize all predictors into z-scores using the mean and standard deviation from train-period data only
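A minimal pandas sketch of these steps, in order. The column names (date, service, order_status, total_cbv) come from the problem statement; the train-period cutoff and everything else in the function are our assumptions:

```python
import numpy as np
import pandas as pd

def preprocess(df, train_end="2023-03-29"):
    """Apply the pre-processing steps above, in order.
    Assumes columns: date (datetime64), service, order_status, total_cbv."""
    # 1. Remove GO-TIX
    df = df[df["service"] != "GO-TIX"]
    # 2. Keep only Cancelled orders
    df = df[df["order_status"] == "Cancelled"]
    # 3. Ensure the complete cartesian product of date x service is present
    full_idx = pd.MultiIndex.from_product(
        [pd.date_range(df["date"].min(), df["date"].max()),
         sorted(df["service"].unique())],
        names=["date", "service"],
    )
    df = (df.groupby(["date", "service"])["total_cbv"].sum()
            .reindex(full_idx)
            .reset_index())
    # 4. Impute missing values with 0
    df["total_cbv"] = df["total_cbv"].fillna(0)
    # 5. is_weekend flag (Saturday=5, Sunday=6 in pandas dayofweek)
    df["is_weekend"] = (df["date"].dt.dayofweek >= 5).astype(int)
    # Remaining predictors: day of month, month, day of week
    df["day_of_month"] = df["date"].dt.day
    df["month"] = df["date"].dt.month
    df["dow"] = df["date"].dt.dayofweek
    # 6. One-hot encode month and day of week
    df = pd.get_dummies(df, columns=["month", "dow"])
    # 7. Standardize the numeric predictor using train-period stats only
    train = df[df["date"] <= train_end]
    mu, sd = train["day_of_month"].mean(), train["day_of_month"].std()
    df["day_of_month"] = (df["day_of_month"] - mu) / sd
    return df
```

Standardizing with statistics from the train period alone (step 7) avoids leaking the forecast period into the features.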
Evaluation metric: MAPE
Validation: 3-fold scheme. Each validation fold has the same length as the forecast period.
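The evaluation setup can be sketched as follows — a minimal reading of the scheme above, in which each of the 3 validation folds is a window at the end of the training data, as long as the forecast period (the helper names are ours):

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent."""
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def validation_folds(dates, fold_len=3, n_folds=3):
    """Split the tail of a sorted date list into n_folds back-to-back
    validation windows, each as long as the forecast period."""
    folds = []
    for i in range(n_folds, 0, -1):
        start = len(dates) - i * fold_len
        folds.append(dates[start:start + fold_len])
    return folds

print(mape([100, 200], [110, 180]))  # ≈ 10.0
```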
Question 1 – After all the pre-processing steps, what is the value of all the predictors for service = GO-FOOD, date = 2023-02-28?
Question 2 – Show the first 6 rows of one-hot encoded variables (month and day of the week)
Question 3 – Print the first 6 rows of the data after pre-processing for service = GO-KILAT. Sort ascendingly by date
Question 4 – Compute the forecast-period MAPE for each service. Display in ascending order based on the MAPE
Question 5 – Create graphs to show the performance of each validation fold. One graph one service. x = date, y = total_cbv. Color: black = actual total_cbv, other colors = the fold predictions (there should be 3 other colors). Only show the validation period. For example, if rows 11, 12, and 13 were used for validations, then do not show the other rows in the graphs. Clearly show the month and date on the x-axis
Part III
Our GO-FOOD service in Surabaya performed very well last month – it had 20% more completed orders than the month before. The manager of GO-FOOD in Surabaya needs to see what is happening in order to maintain this success from next month onwards.
What quantitative methods would you use to evaluate the sudden growth? How would you evaluate the customers’ behavior?
Dataset
Part 1
Table A [Link]
Table B [Link]
Part 2 [Link]
The Solution to Part One
Before beginning to solve, research the blogs and whitepapers on the company website (links are added below). Company archives provide useful resources that act as guides and help you understand what the company stands for and what it expects from this role. Questions one and three can be considered open-ended problems. Question two is a simple exercise on regression, not necessarily focused on the best model but on the processes involved in building one.
RCA on Growth Drivers and Headwinds Faced by the Organization
Import data:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os

# sales_df is assumed to be loaded from the Table A CSV
print("Shape of the df")
display(sales_df.shape)
print("HEAD")
display(sales_df.head())
print("NULL CHECK")
display(sales_df.isnull().any().sum())
print("NULL CHECK")
display(sales_df.isnull().sum())
print("df INFO")
display(sales_df.info())
print("DESCRIBE")
display(sales_df.describe())

Create pandas datetimes from the object format; pandas datetimes are easy to work with when manipulating dates. Derive the month column from the datetime, filter out month 4 (April), and rename the months Jan, Feb, Mar.
## Convert the date column to datetime
time_to_pandas_time = ["date"]
for cols in time_to_pandas_time:
    sales_df[cols] = pd.to_datetime(sales_df[cols])
sales_df.dtypes
sales_df['Month'] = sales_df['date'].dt.month
sales_df.head()
sales_df['Month'].drop_duplicates()
Q1_2023_df = sales_df[sales_df['Month'] != 4]  # keep only Q1
Q1_2023_df['Month'] = np.where(Q1_2023_df['Month'] == 1, "Jan",
                      np.where(Q1_2023_df['Month'] == 2, "Feb",
                      np.where(Q1_2023_df['Month'] == 3, "Mar", "Apr")))
print(Q1_2023_df.head(1))
display(Q1_2023_df.order_status.unique())
display(Q1_2023_df.service.unique())

At the group level, overall revenue has grown by 14%. This is a positive outcome. Let’s break this down by service and identify the services that are performing well.
revenue_total.sort_values(["Jan"], ascending=[False], inplace=True)
revenue_total.head()
revenue_total['cummul1'] = revenue_total["Jan"].cumsum()
revenue_total['cummul2'] = revenue_total["Feb"].cumsum()
revenue_total['cummul3'] = revenue_total["Mar"].cumsum()
top_95_revenue = revenue_total[revenue_total["cummul3"] <= 95]
display(top_95_revenue)
ninety_five_perc_gmv = list(top_95_revenue.service.unique())
print(ninety_five_perc_gmv)
top_95_revenue_plot = top_95_revenue[["Jan", "Feb", "Mar"]]
top_95_revenue_plot.index = top_95_revenue.service
top_95_revenue_plot.T.plot.line(figsize=(5, 3))
## Share of revenue has changed, but has overall revenue changed for these top 4 services?
For all three months, Ride, Food, Shop, and Send contribute more than 90% of the net revenue share. (In Jan, Ride contributed 51% of net revenue.)
Hence, following the 80:20 rule for the most recent month, we can restrict this analysis to the top three services: Ride, Food, and Send.
Out of the 11 available services, only 3 contribute to more than 90% of revenue. This is a cause of concern and there is immense opportunity for the rest of the services to grow.
Completed Rides

## NET - completed rides
Q1_2023_df_pivot_cbv_4 = Q1_2023_df[Q1_2023_df["order_status"] == "Completed"]
Q1_2023_df_pivot_cbv_4 = Q1_2023_df_pivot_cbv_4[Q1_2023_df_pivot_cbv_4.service.isin(ninety_five_perc_gmv)]
Q1_2023_df_pivot_cbv = Q1_2023_df_pivot_cbv_4.pivot_table(index='service', columns=['Month'], values='total_cbv', aggfunc='sum')
Q1_2023_df_pivot_cbv = Q1_2023_df_pivot_cbv[["Jan", "Feb", "Mar"]]
for cols in Q1_2023_df_pivot_cbv.columns:
    Q1_2023_df_pivot_cbv[cols] = Q1_2023_df_pivot_cbv[cols] / 1000000000  # express in billions
display(Q1_2023_df_pivot_cbv)
display(Q1_2023_df_pivot_cbv.T.plot())
## Go-Shop's revenue has fallen; for the others, revenue is roughly constant
Q1_2023_df_pivot_cbv_4 = Q1_2023_df_pivot_cbv
Q1_2023_df_pivot_cbv_4.reset_index(inplace=True)
Q1_2023_df_pivot_cbv_4["Feb_jan_growth"] = (Q1_2023_df_pivot_cbv_4.Feb / Q1_2023_df_pivot_cbv_4.Jan - 1) * 100
Q1_2023_df_pivot_cbv_4["Mar_Feb_growth"] = (Q1_2023_df_pivot_cbv_4.Mar / Q1_2023_df_pivot_cbv_4.Feb - 1) * 100
display(Q1_2023_df_pivot_cbv_4)
Ride, the revenue-driving engine, has grown by 19% (Jan to Mar), compared to Send, which has grown by 25%.
Food has shrunk by 7%; given that food delivery as a business is growing around the globe, this is a major cause for concern.
Canceled Rides (Lost Opportunity)

Q1_2023_df_pivot_cbv = Q1_2023_df[Q1_2023_df["order_status"] != "Completed"]
Q1_2023_df_pivot_cbv = Q1_2023_df_pivot_cbv.pivot_table(index='service', columns=['Month'], values='total_cbv', aggfunc='sum')
Q1_2023_df_pivot_cbv = Q1_2023_df_pivot_cbv[["Jan", "Feb", "Mar"]]
revenue_total = pd.DataFrame()
for cols in Q1_2023_df_pivot_cbv.columns:
    revenue_total[cols] = (Q1_2023_df_pivot_cbv[cols] / Q1_2023_df_pivot_cbv[cols].sum()) * 100
revenue_total.reset_index(inplace=True)
display(revenue_total.head())
overall_cbv = Q1_2023_df_pivot_cbv.sum()
print(overall_cbv)
overall_cbv.plot()
plt.show()
overall_cbv_df = pd.DataFrame(data=overall_cbv).T
display(overall_cbv_df)
overall_cbv_df["Feb_jan_growth"] = (overall_cbv_df.Feb / overall_cbv_df.Jan - 1) * 100
overall_cbv_df["Mar_Feb_growth"] = (overall_cbv_df.Mar / overall_cbv_df.Feb - 1) * 100
display(overall_cbv_df)
revenue_total.sort_values(["Jan"], ascending=[False], inplace=True)
revenue_total.head()
revenue_total['cummul1'] = revenue_total["Jan"].cumsum()
revenue_total['cummul2'] = revenue_total["Feb"].cumsum()
revenue_total['cummul3'] = revenue_total["Mar"].cumsum()
top_95_revenue = revenue_total[revenue_total["cummul3"] <= 95]
display(top_95_revenue)
ninety_five_perc_gmv = list(top_95_revenue.service.unique())
print(ninety_five_perc_gmv)
Lost revenue has grown by 6%.
Directors can increase their efforts to reduce this to less than 5%.
Analysis of Orders

Q1_2023_df_can_com = Q1_2023_df[Q1_2023_df.order_status.isin(["Cancelled", "Completed"])]
Q1_2023_df_can_com = Q1_2023_df_can_com[Q1_2023_df_can_com.service.isin(ninety_five_perc_gmv)]
Q1_2023_df_pivot = Q1_2023_df_can_com.pivot_table(index='service', columns=['order_status', 'Month'], values='num_orders', aggfunc='sum')
Q1_2023_df_pivot.fillna(0, inplace=True)
multi_tuples = [('Cancelled', 'Jan'), ('Cancelled', 'Feb'), ('Cancelled', 'Mar'),
                ('Completed', 'Jan'), ('Completed', 'Feb'), ('Completed', 'Mar')]
multi_cols = pd.MultiIndex.from_tuples(multi_tuples, names=['Experiment', 'Lead Time'])
Q1_2023_df_pivot = pd.DataFrame(Q1_2023_df_pivot, columns=multi_cols)
display(Q1_2023_df_pivot.columns)
display(Q1_2023_df_pivot.head(3))
Q1_2023_df_pivot.columns = ['_'.join(col) for col in Q1_2023_df_pivot.columns.values]
display(Q1_2023_df_pivot)
Q1_2023_df_pivot["jan_total"] = Q1_2023_df_pivot.Cancelled_Jan + Q1_2023_df_pivot.Completed_Jan
Q1_2023_df_pivot["feb_total"] = Q1_2023_df_pivot.Cancelled_Feb + Q1_2023_df_pivot.Completed_Feb
Q1_2023_df_pivot["mar_total"] = Q1_2023_df_pivot.Cancelled_Mar + Q1_2023_df_pivot.Completed_Mar
Q1_2023_df_pivot["Cancelled_Jan_ratio"] = Q1_2023_df_pivot.Cancelled_Jan / Q1_2023_df_pivot.jan_total
Q1_2023_df_pivot["Cancelled_Feb_ratio"] = Q1_2023_df_pivot.Cancelled_Feb / Q1_2023_df_pivot.feb_total
Q1_2023_df_pivot["Cancelled_Mar_ratio"] = Q1_2023_df_pivot.Cancelled_Mar / Q1_2023_df_pivot.mar_total
Q1_2023_df_pivot["Completed_Jan_ratio"] = Q1_2023_df_pivot.Completed_Jan / Q1_2023_df_pivot.jan_total
Q1_2023_df_pivot["Completed_Feb_ratio"] = Q1_2023_df_pivot.Completed_Feb / Q1_2023_df_pivot.feb_total
Q1_2023_df_pivot["Completed_Mar_ratio"] = Q1_2023_df_pivot.Completed_Mar / Q1_2023_df_pivot.mar_total
Q1_2023_df_pivot_1 = Q1_2023_df_pivot[["Cancelled_Jan_ratio", "Cancelled_Feb_ratio", "Cancelled_Mar_ratio",
                                       "Completed_Jan_ratio", "Completed_Feb_ratio", "Completed_Mar_ratio"]]
Q1_2023_df_pivot_1
In March, Food, Ride, and Send had 17%, 15%, and 13% of total orders canceled, respectively.
Food has increased its order completion rate, from 69% in January to 83% in March. This is a significant improvement.
## Column-wise check of whether the cancellation share has increased
perc_of_cols_orders = pd.DataFrame()
for cols in Q1_2023_df_pivot.columns:
    perc_of_cols_orders[cols] = (Q1_2023_df_pivot[cols] / Q1_2023_df_pivot[cols].sum()) * 100
perc_of_cols_orders
perc_of_cols_orders.T.plot(kind='bar', stacked=True)
In March, of all canceled orders, Ride accounted for a 72% share, followed by Food (17%) and Send (6%).
Summary of Findings and Recommendations for Business Analytics
Ride –
The top contributor to revenue.
Canceled GMV in March has grown by 42%.
Reduce cancellations through product intervention and new product features.
Food –
Canceled orders have increased, but due to cost optimization, GMV loss has been successfully arrested.
Increase net revenue by reducing costs and cancellations.
Drive higher customer acquisition.
Send –
Canceled GMV and orders, both have taken a hit and are a major cause of concern.
A good order-completion experience increases retention, powering revenue growth through retention.
Maximize Profits By Optimizing Budget Spends
The business team has a budget of 40 billion for Q2, and it has set growth targets for each service. For each service, the cost of 100 incremental rides and the maximum growth target for Q2 are given below. For Go-Box, getting 100 more bookings costs 40M, and the maximum growth target in Q2 is 7%.
Import budget data and use sales data from the above analysis.
print("Shape of the df")
display(budget_df.shape)
print("HEAD")
display(budget_df.head())
print("NULL CHECK")
display(budget_df.isnull().any().sum())
print("NULL CHECK")
display(budget_df.isnull().sum())
print("df INFO")
display(budget_df.info())
print("DESCRIBE")
display(budget_df.describe())

## Convert the date column to datetime
time_to_pandas_time = ["date"]
for cols in time_to_pandas_time:
    sales_df[cols] = pd.to_datetime(sales_df[cols])
sales_df.dtypes
sales_df['Month'] = sales_df['date'].dt.month
sales_df.head()
sales_df['Month'].drop_duplicates()
sales_df_q1 = sales_df[sales_df['Month'] != 4]
## Assumption: only completed orders count toward Q1 revenue
sales_df_q1 = sales_df_q1[sales_df_q1["order_status"] == "Completed"]

sales_df_q1_pivot = sales_df_q1.pivot_table(index='service', columns=['order_status'], values='total_cbv', aggfunc='sum')
sales_df_q1_pivot_orders = sales_df_q1.pivot_table(index='service', columns=['order_status'], values='num_orders', aggfunc='sum')
sales_df_q1_pivot.reset_index(inplace=True)
sales_df_q1_pivot.columns = ["Service", "Q1_revenue_completed"]
sales_df_q1_pivot
sales_df_q1_pivot_orders.reset_index(inplace=True)
sales_df_q1_pivot_orders.columns = ["Service", "Q1_order_completed"]
optimization_Df = pd.merge(sales_df_q1_pivot, budget_df, how="left", on="Service")
optimization_Df = pd.merge(optimization_Df, sales_df_q1_pivot_orders, how="left", on="Service")
optimization_Df.columns = ["Service", "Q1_revenue_completed", "Cost_per_100_inc_booking", "max_q2_growth_rate", "Q1_order_completed"]
optimization_Df.head(5)
For Box, Q1 revenue is 23B, the cost of 100 incremental rides is 40M, the maximum expected growth rate is 7%, and 63K total rides were completed at roughly 370K per order.
Is it possible to achieve the maximum growth rate for all the services with an available budget of 40B?
## If max growth for every service is to be achieved, what budget is needed, and what is the deficit?
optimization_Df["max_q2_growth_rate_upd"] = optimization_Df['max_q2_growth_rate'].str.extract(r'(\d+)').astype(int)  # extract the integer from the string
optimization_Df["max_growth_q2_cbv"] = optimization_Df.Q1_order_completed * (1 + optimization_Df.max_q2_growth_rate_upd / 100)  # Q2 max orders based on Q1 orders
optimization_Df["abs_inc_orders"] = optimization_Df.max_growth_q2_cbv - optimization_Df.Q1_order_completed  # total increase in orders
optimization_Df["cost_of_max_inc_q2_order"] = optimization_Df.abs_inc_orders * optimization_Df.Cost_per_100_inc_booking / 100  # total cost of maximum growth for each service
display(optimization_Df)
display(budget_df[budget_df["Service"] == "Budget:"].reset_index())
budget_max = budget_df[budget_df["Service"] == "Budget:"].reset_index()
budget_max = budget_max.iloc[:, 2:3].values[0][0]
print("Budget difference by")
display(budget_max - optimization_Df.cost_of_max_inc_q2_order.sum())
## Therefore the maximum for every service cannot be achieved

The answer is no. 247B (247,244,617,204) more budget would be required to achieve the growth targets for all services.
Is it possible to achieve at least 10% of the maximum growth rate for all the services with an available budget of 40B?
## What budget is needed for at least 10% of the max growth, and how much is left over?
optimization_Df["min_10_max_growth_q2_cbv"] = optimization_Df.Q1_order_completed * (1 + optimization_Df.max_q2_growth_rate_upd / 1000)  # orders if 10% of max growth is achieved
optimization_Df["min_10_abs_inc_orders"] = optimization_Df.min_10_max_growth_q2_cbv - optimization_Df.Q1_order_completed  # increase in orders needed for 10%-of-max growth
optimization_Df["min_10_cost_of_max_inc_q2_order"] = optimization_Df.min_10_abs_inc_orders * optimization_Df.Cost_per_100_inc_booking / 100  # cost of that increase
display(budget_max - optimization_Df.min_10_cost_of_max_inc_q2_order.sum())  # total budget remaining
display((budget_max - optimization_Df.min_10_cost_of_max_inc_q2_order.sum()) / budget_max)  # budget utilization share
optimization_Df["perc_min_10_max_growth_q2_cbv"] = optimization_Df.max_q2_growth_rate_upd / 1000  # 10% of max growth, as a fraction (7% -> 0.007)
optimization_Df["perc_max_growth_q2_cbv"] = optimization_Df.max_q2_growth_rate_upd / 100  # max growth, as a fraction
optimization_Df["q1_aov"] = optimization_Df.Q1_revenue_completed / optimization_Df.Q1_order_completed  # Q1 average order value
optimization_Df["order_profitability"] = 0.1  # assumption: 10% of each order is profit
optimization_Df["a_orders_Q2"] = optimization_Df.Q1_order_completed * (1 + optimization_Df.perc_min_10_max_growth_q2_cbv)  # Q2 orders at 10%-of-max growth
optimization_Df["a_abs_inc_orders"] = optimization_Df.a_orders_Q2 - optimization_Df.Q1_order_completed
optimization_Df["a_Q2_costs"] = optimization_Df.Cost_per_100_inc_booking * optimization_Df.a_abs_inc_orders / 100
display(budget_max - optimization_Df.a_Q2_costs.sum())
optimization_Df

The answer is yes.
With only 28% of the available 40B budget, this can be achieved. But underutilizing the budget is not an option; no business leader would spend only 28% of the available budget.
So, the maximum growth across all services cannot be achieved, and achieving 10% of the maximum growth rate will lead to an underutilized budget. Hence the need here is to optimize spending such that:
The overall cash burn doesn’t cross 40B.
The overall growth rate in Q2 across services is equal to or below the maximum growth rate.
These are called constraints in linear optimization.
The objective is to maximize profits.
Assumptions used here:
Every service has a profit of 10%.
AOV (revenue/orders) will remain the same as in Q1.
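Before the PuLP formulation below, the decision structure can be illustrated with a tiny brute-force toy (all numbers here are invented for illustration): pick exactly one growth option per service, keep the total cost within budget, and maximize profit.

```python
from itertools import product

# Hypothetical options per service: (growth_rate, cost, profit) -- invented numbers
options = {
    "Ride": [(0.01, 10, 24), (0.02, 20, 30), (0.03, 32, 35)],
    "Food": [(0.01, 8, 12), (0.02, 18, 20)],
}
BUDGET = 40

best_profit, best_choice = None, None
for combo in product(*options.values()):
    cost = sum(c for _, c, _ in combo)    # budget constraint
    profit = sum(p for _, _, p in combo)  # objective
    if cost <= BUDGET and (best_profit is None or profit > best_profit):
        best_profit, best_choice = profit, dict(zip(options, combo))

print(best_profit, best_choice)  # 50 {'Ride': (0.02, 20, 30), 'Food': (0.02, 18, 20)}
```

PuLP expresses the same choice with binary variables x_i and linear constraints, which scales where brute-force enumeration would not.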
Pre-optimization data pipeline:
## Data prep for PuLP optimization
# Simulate every integer growth percentage from 1 up to the largest max-growth target
perc_all_df = pd.DataFrame(data=list(range(1, optimization_Df.max_q2_growth_rate_upd.max() + 1)), columns=["growth_perc"])
display(perc_all_df.head(1))
optimization_Df_2 = optimization_Df.merge(perc_all_df, how="cross")  # cross join with the optimization df
## Keep all percentages up to the maximum for each service; the minimum kept is 1
## (filter_flag was missing in the source; this line reconstructs it from the comment above)
optimization_Df_2["filter_flag"] = np.where(optimization_Df_2.growth_perc <= optimization_Df_2.max_q2_growth_rate_upd, 1, 0)
optimization_Df_2["abs_profit"] = optimization_Df_2.q1_aov * optimization_Df_2.order_profitability
optimization_Df_3 = optimization_Df_2[optimization_Df_2["filter_flag"] == 1]
display(optimization_Df_3.head(1))
display(optimization_Df_3.columns)
## Keep only the columns needed
optimization_Df_4 = optimization_Df_3[[
    'Service',                        # services offered
    'Cost_per_100_inc_booking',       # cost of 100 additional orders
    'Q1_order_completed',             # to compute Q2 growth from Q1 orders
    'perc_min_10_max_growth_q2_cbv',  # minimum growth percent needed
    'perc_max_growth_q2_cbv',         # max growth percent allowed
    'abs_profit',                     # profit per order
    'growth_perc',                    # simulated growth percent
]]
display(optimization_Df_4.head(2))
optimization_Df_4["orders_Q2"] = optimization_Df_4.Q1_order_completed * (1 + optimization_Df_4.growth_perc / 100)  # Q2 orders at the simulated growth
optimization_Df_4["abs_inc_orders"] = optimization_Df_4.orders_Q2 - optimization_Df_4.Q1_order_completed
optimization_Df_4["profit_Q2_cbv"] = optimization_Df_4.orders_Q2 * optimization_Df_4.abs_profit
optimization_Df_4["growth_perc"] = optimization_Df_4.growth_perc / 100
optimization_Df_4["Q2_costs"] = optimization_Df_4.Cost_per_100_inc_booking * optimization_Df_4.abs_inc_orders / 100
display(optimization_Df_4.head())
optimization_Df_5 = optimization_Df_4[[
    'Service',                        # services offered
    'Q2_costs',                       # total cost of the simulated growth
    'perc_min_10_max_growth_q2_cbv',  # minimum growth percent needed
    'perc_max_growth_q2_cbv',         # max growth percent allowed
    'profit_Q2_cbv',                  # total profit at the assumed profitability rate
    'growth_perc',                    # simulated growth percent
]]
display(optimization_Df_5.head(10))
display(optimization_Df_5.shape)

Understanding the Optimization Dataset
Service – Go product.
10% of the max growth is the minimum growth each service should achieve, so Box should achieve at least 0.7% growth.
This is a constraint.
Max growth decided by business leaders for Box is 7%.
This is a constraint.
For Box, 1% to 7% is the range of growth: 1% is more than 0.7%, and 7% is the maximum. The optimizer will choose the best growth rate within these constraints.
This is a decision variable. The algorithm will pick one among 7.
For 1% growth(Incremental), the cash burn is 255M.
This is a constraint.
If incremental growth is 1%, then overall profit(organic + inorganic) is 2.4B.
This is the objective.
## Best optimization for our case
prob = LpProblem("growth_maximize", LpMaximize)  ## Initialize a maximization problem

optimization_Df_5.reset_index(inplace=True, drop=True)
markdowns = list(optimization_Df_5['growth_perc'].unique())  ## List of all growth percentages
cost_v = list(optimization_Df_5['Q2_costs'])  ## Incremental cost to achieve each growth %
perc_min_10_max_growth_q2_cbv = list(optimization_Df_5['perc_min_10_max_growth_q2_cbv'])
growth_perc = list(optimization_Df_5['growth_perc'])

## LP variables
low = LpVariable.dicts("l_", perc_min_10_max_growth_q2_cbv, lowBound=0, cat="Continuous")
growth = LpVariable.dicts("g_", growth_perc, lowBound=0, cat="Continuous")
delta = LpVariable.dicts("d", markdowns, 0, 1, LpBinary)
x = LpVariable.dicts("x", range(0, len(optimization_Df_5)), 0, 1, LpBinary)

## Objective function - maximize profit (column profit_Q2_cbv)
## Each row of the table is assigned a binary variable x_0, x_1, x_2, ...
## which is later used to read off the optimal growth percent
prob += lpSum(x[i] * optimization_Df_5.loc[i, 'profit_Q2_cbv'] for i in range(0, len(optimization_Df_5)))

## Constraint one - one unique growth percentage per service
for i in optimization_Df_5['Service'].unique():
    prob += lpSum([x[idx] for idx in optimization_Df_5[optimization_Df_5['Service'] == i].index]) == 1

## Constraint two - do not cross the total budget
prob += (lpSum(x[i] * optimization_Df_5.loc[i, 'Q2_costs'] for i in range(0, len(optimization_Df_5))) - budget_max) <= 0

## Constraint three - the minimum growth should be achieved
for i in range(0, len(optimization_Df_5)):
    pass  ## minimum-growth constraint (body missing in the original)

prob.writeLP('markdown_problem')  ## Write the problem to a file
prob.solve()  ## Solve the problem
display(LpStatus[prob.status])  ## "Optimal" if the problem solved successfully
display(value(prob.objective))  ## The maximized profit with the available budget -
## 98731060158.842 @ 10% profit per order
print(prob)
print(growth)

Understanding How to Write An LP Problem is Key to Solving it
Initialize the problem
prob = LpProblem("growth_maximize", LpMaximize)
growth_maximize is the name of the problem.
LpMaximize is letting the solver know that it’s a maximization problem.
Create a variable of the decision function
growth = LpVariable.dicts("g_", growth_perc, lowBound = 0, cat = "Continuous")
For PuLP, variables need to be created as pulp dicts.
g_ is the prefix for the variable.
growth_perc is the name of the list.
lowBound is the minimum growth percent; it can start from 0.
The variable is continuous.
There are 60 unique growth percentages from 1%(minimum) to 60%(maximum). (Food has a 60% maximum growth rate).
Variables – 0 <= x_0 <= 1 Integer for row 0 to 0 <= x_279 <= 1 Integer for row 279.
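The row-selector variables can be sketched in isolation. This is a toy example with 5 rows instead of 280, just to show what LpVariable.dicts builds:

```python
# Toy sketch: one binary selector per table row, as in the x variables above.
from pulp import LpVariable, LpBinary

x = LpVariable.dicts("x", range(5), 0, 1, LpBinary)

# pulp names them x_0 ... x_4, each constrained to 0 <= x_i <= 1, integer
print(sorted(v.name for v in x.values()))
```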
Add objective function to the problem
prob += lpSum(x[i] * optimization_Df_5.loc[i, 'profit_Q2_cbv'] for i in range(0, len(optimization_Df_5)))
Add constraint:
One – One growth percentage for each service
for i in optimization_Df_5['Service'].unique():
    prob += lpSum([x[idx] for idx in optimization_Df_5[optimization_Df_5['Service'] == i].index]) == 1
For each service, only one growth percent is selected.
For Box, out of the seven candidates (1% to 7%), only one is selected.
The equation for box – _C1: x_0 + x_1 + x_2 + x_3 + x_4 + x_5 + x_6 = 1
The equation for GLAM – _C2: x_10 + x_11 + x_12 + x_13 + x_14 + x_15 + x_16 + x_7 + x_8 + x_9 = 1
As there are 11 services, 11 constraints are created, one for each service.
Two – Do not cross the total budget of 40B
prob += (lpSum(x[i] * optimization_Df_5.loc[i, 'Q2_costs'] for i in range(0, len(optimization_Df_5))) - budget_max) <= 0
The sum of all costs minus the total budget should be less than or equal to zero.
Equation _C12: 255040000 x_0 + 510080000 x_1 + … + 16604 x_279 <= 40000000000
_C12 is the only budget constraint because there is one total budget of 40B, and there is no constraint on how much each individual service can spend.
Three – constraint to say minimum should be achieved
For each row, the minimum growth percent constraint equation is created. There are 279 rows, so 279 constraints are created.
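An LP with 280 rows and a dozen constraints is hard to eyeball, so a miniature version of the same structure may help. This is a hypothetical sketch with made-up services and numbers (not from the case study): two services, a few candidate growth rows each, one binary selector per row, exactly one row chosen per service, and a total budget cap.

```python
# Hypothetical miniature of the same LP. All numbers are illustrative.
import pandas as pd
from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value, LpStatus

df = pd.DataFrame({
    "Service":       ["Box", "Box", "Box", "Glam", "Glam"],
    "Q2_costs":      [100,   200,   300,   150,    260],
    "profit_Q2_cbv": [10,    18,    24,    12,     20],
})
budget_max = 400

prob = LpProblem("growth_maximize_toy", LpMaximize)
x = LpVariable.dicts("x", range(len(df)), 0, 1, LpBinary)

# objective: total profit of the selected rows
prob += lpSum(x[i] * df.loc[i, "profit_Q2_cbv"] for i in range(len(df)))

# constraint 1: exactly one growth row per service
for s in df["Service"].unique():
    prob += lpSum(x[i] for i in df[df["Service"] == s].index) == 1

# constraint 2: total cost must not exceed the budget
prob += lpSum(x[i] * df.loc[i, "Q2_costs"] for i in range(len(df))) <= budget_max

prob.solve()
print(LpStatus[prob.status], value(prob.objective))
```

The solver rules out the most profitable Box row (cost 300) because no Glam row fits in the remaining budget, and settles on a cheaper combination, which is exactly the trade-off the full problem makes across 11 services.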
"Optimal" is the desired output.
display(LpStatus[prob.status])
98731060158.842 is the maximized profit.
display(value(prob.objective))
var_name = []
var_values = []
for variable in prob.variables():
    if 'x' in variable.name:
        var_name.append(variable.name)
        var_values.append(variable.varValue)

results = pd.DataFrame()
results['variable_name'] = var_name
results['variable_values'] = var_values
results['variable_name_1'] = results['variable_name'].apply(lambda x: x.split('_')[0])
results['variable_name_2'] = results['variable_name'].apply(lambda x: x.split('_')[1])
results['variable_name_2'] = results['variable_name_2'].astype(int)
results.sort_values(by='variable_name_2', inplace=True)
results.drop(columns=['variable_name_1', 'variable_name_2'], inplace=True)
results.reset_index(inplace=True)
results.drop(columns='index', axis=1, inplace=True)

optimization_Df_5['variable_name'] = results['variable_name'].copy()
optimization_Df_5['variable_values'] = results['variable_values'].copy()
optimization_Df_5['variable_values'] = optimization_Df_5['variable_values'].astype(int)

## Keep only the rows the optimizer selected
optimization_Df_10 = optimization_Df_5[optimization_Df_5['variable_values'] == 1].reset_index()
display(optimization_Df_10)
display(budget_max - optimization_Df_10.Q2_costs.sum())
display(optimization_Df_10.Q2_costs.sum())
The optimizer's chosen growth rate for each service is shown in the chart above: for Box it's 1%, for Clean it's 1%, for Food it's 17%, etc.
The total cash burn is – 39999532404.0
Underutilized budget – 467596.0
Maximized profit – 98731060158.0
The Solution to Part Two

time_to_pandas_time = ["date"]
for cols in time_to_pandas_time:
    sales_df[cols] = pd.to_datetime(sales_df[cols])

sales_df['Month'] = sales_df['date'].dt.month
Q1_2023_df = sales_df[sales_df['Month'] != 900]
Q1_2023_df['Month'] = np.where(Q1_2023_df['Month'] == 1, "Jan",
                      np.where(Q1_2023_df['Month'] == 2, "Feb",
                      np.where(Q1_2023_df['Month'] == 3, "Mar", "Apr")))
Q1_2023_df['test_control'] = np.where(Q1_2023_df['date'] <= "2023-03-30", "train", "test")
display(Q1_2023_df.head(5))
display(Q1_2023_df.order_status.unique())
display(Q1_2023_df.service.unique())
display(Q1_2023_df.date.max())
Import dataset
Convert date to pandas datetime
Derive month columns
Derive train and test columns
display(Q1_2023_df.head())
display(Q1_2023_df.date.max())

Q1_2023_df_2 = Q1_2023_df[Q1_2023_df["date"] <= "2023-04-01"]
display(Q1_2023_df_2.date.max())
Q1_2023_df_2 = Q1_2023_df_2[Q1_2023_df_2["order_status"] == "Cancelled"]

Q1_2023_df_date_unique = Q1_2023_df_2[["date"]].drop_duplicates()
Q1_2023_df_date_service = Q1_2023_df_2[["service"]].drop_duplicates()
Q1_2023_df_CJ = Q1_2023_df_date_unique.merge(Q1_2023_df_date_service, how="cross")  ## cross join of dates x services
display(Q1_2023_df_date_unique.head())
display(Q1_2023_df_date_unique.shape)
display(Q1_2023_df_date_unique.max())
display(Q1_2023_df_date_unique.min())
display(Q1_2023_df_2.shape)

Q1_2023_df_3 = Q1_2023_df_CJ.merge(Q1_2023_df_2, on=['date', 'service'], how='left', suffixes=('_x', '_y'))
display(Q1_2023_df_3.head())
display(Q1_2023_df_3.shape)
display(Q1_2023_df_CJ.shape)

Q1_2023_df_3["total_cbv"].fillna(0, inplace=True)
print("Null check ", Q1_2023_df_3.isnull().values.any())
nan_rows = Q1_2023_df_3[Q1_2023_df_3['total_cbv'].isnull()]
display(nan_rows)
display(Q1_2023_df_3[Q1_2023_df_3.isnull().any(axis=1)])

Q1_2023_df_3["dayofweek"] = Q1_2023_df_3["date"].dt.dayofweek
Q1_2023_df_3["dayofmonth"] = Q1_2023_df_3["date"].dt.day
Q1_2023_df_3["Is_Weekend"] = Q1_2023_df_3["date"].dt.day_name().isin(['Saturday', 'Sunday'])
Q1_2023_df_3.head()
Filter for only canceled orders.
For all services, cross join with dates from Jan 01 to Apr 01, so that predictions for all days are available.
Replace NULL with 0.
Derive day of the month
Derive day of the week.
Create binary weekend/weekday column
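The cross-join gap-filling step above can be sketched on toy data (hypothetical services and dates, not the case-study dataset): cross-join every date with every service, left-join the actual orders back in, and fill the missing combinations with 0 so every day has a row.

```python
# Toy sketch of the date x service gap-filling pattern. Data is made up.
import pandas as pd

orders = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-01", "2023-01-01", "2023-01-02"]),
    "service": ["GO-FOOD", "GO-RIDE", "GO-FOOD"],
    "total_cbv": [120.0, 80.0, 95.0],
})

dates = orders[["date"]].drop_duplicates()
services = orders[["service"]].drop_duplicates()
grid = dates.merge(services, how="cross")          # every date x service pair

full = grid.merge(orders, on=["date", "service"], how="left")
full["total_cbv"] = full["total_cbv"].fillna(0)    # GO-RIDE on Jan 02 -> 0

print(full)
```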
Q1_2023_df_4 = Q1_2023_df_3[Q1_2023_df_3["service"] != "GO-TIX"]
Q1_2023_df_5 = pd.get_dummies(Q1_2023_df_4, columns=["Month", "dayofweek"])
display(Q1_2023_df_5.head())

import numpy as np
import pandas as pd
from sklearn.preprocessing import Normalizer, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score, cross_val_predict
from sklearn.metrics import make_scorer
from numpy import mean, std

Q1_2023_df_5.columns
all_columns = ['date', 'service', 'num_orders', 'order_status', 'total_cbv',
               'test_control', 'dayofmonth', 'Is_Weekend', 'Month_Apr', 'Month_Feb',
               'Month_Jan', 'Month_Mar', 'dayofweek_0', 'dayofweek_1', 'dayofweek_2',
               'dayofweek_3', 'dayofweek_4', 'dayofweek_5', 'dayofweek_6']
model_variables = ['dayofmonth', 'Is_Weekend', 'Month_Apr', 'Month_Feb', 'Month_Jan',
                   'Month_Mar', 'dayofweek_0', 'dayofweek_1', 'dayofweek_2',
                   'dayofweek_3', 'dayofweek_4', 'dayofweek_5', 'dayofweek_6']
target_Variable = ["total_cbv"]
all_columns = ['service', 'test_control', 'dayofmonth', 'Is_Weekend', 'Month_Apr',
               'Month_Feb', 'Month_Jan', 'Month_Mar', 'dayofweek_0', 'dayofweek_1',
               'dayofweek_2', 'dayofweek_3', 'dayofweek_4', 'dayofweek_5', 'dayofweek_6']
Filter out GO-TIX
One hot encode – Month and day of the week
Import all the necessary libraries
Create a list of columns, train, predictor, etc.
model_1 = Q1_2023_df_5[Q1_2023_df_5["service"] == "GO-FOOD"]
test = model_1[model_1["test_control"] != "train"]
train = model_1[model_1["test_control"] == "train"]
X = train[model_variables]
y = train[target_Variable]
train_predict = model_1[model_1["test_control"] == "train"]
x_ = X[model_variables]
sc = StandardScaler()
X_train = sc.fit_transform(X)
X_test = sc.transform(x_)
Filter data for one service – GO-FOOD
Create train and test dataframes
Create X – with train columns, and y with predictor column.
Use StandardScaler for z-score transformation.
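As a quick illustration of what the scaler does (toy numbers, one feature): it applies z = (x - mean) / std, where the mean and std come from the training data, and transform-only is reused on new data.

```python
# Toy sketch of StandardScaler's z-score transformation.
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0], [2.0], [3.0], [4.0]])   # one toy feature
sc = StandardScaler()
Z = sc.fit_transform(X_train)

# the same result by hand, using the population std that StandardScaler uses
manual = (X_train - X_train.mean()) / X_train.std()
print(np.allclose(Z, manual))
```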
## Define a custom function that returns a single metric score
## (note the parentheses: accuracy = (1 - MAPE) * 100)
def NMAPE(y_true, y_pred):
    return (1 - np.mean(np.abs((y_true - y_pred) / y_true))) * 100

## Make a scorer from the custom function
nmape_scorer = make_scorer(NMAPE)

## Prepare the cross-validation procedure
cv = KFold(n_splits=3, random_state=1, shuffle=True)

## Create the model
model = LinearRegression()

## Evaluate the model
scores = cross_val_score(model, X, y, scoring=nmape_scorer, cv=cv, n_jobs=-1)

## Report performance
print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
y_pred = cross_val_predict(model, X, y, cv=cv)
cross_val_score doesn’t have MAPE as a built-in scorer, so define MAPE.
Create CV instance
Create LR instance
Use cross_val_score to get the average MAPE score across CV folds for GO-FOOD.
For each service, this code can be looped; create a function that wraps the modeling steps.
def go_model(Q1_2023_df_5, go_service, model_variables, target_Variable):
    """
    Q1_2023_df_5 -- base data
    go_service -- service to model
    model_variables -- variables used to train the model
    target_Variable -- predictor column
    """
    model_1 = Q1_2023_df_5[Q1_2023_df_5["service"] == go_service]
    test = model_1[model_1["test_control"] != "train"]
    train = model_1[model_1["test_control"] == "train"]
    X = train[model_variables]
    y = train[target_Variable]
    train_predict = model_1[model_1["test_control"] == "train"]
    x_ = X[model_variables]
    X_train = sc.fit_transform(X)
    X_test = sc.transform(x_)

    ## Prepare the cross-validation procedure
    cv = KFold(n_splits=3, random_state=1, shuffle=True)
    ## Create the model
    model = LinearRegression()
    ## Evaluate the model
    scores = cross_val_score(model, X, y, scoring=nmape_scorer, cv=cv, n_jobs=-1)
    ## Report performance
    print('Accuracy: %.3f (%.3f)' % (mean(scores), std(scores)))
    y_pred = cross_val_predict(model, X, y, cv=cv)
    return y_pred, mean(scores), std(scores)

a, b, c = go_model(Q1_2023_df_5, "GO-FOOD", model_variables, target_Variable)
b
Modeling steps converted to a function:
Q1_2023_df_5 – Base data
go_service – go-tix, go-send etc
model_variables – variables used to train the model
target_Variable – predictor variable(total_cbv).
For each service, the method can be run to get the average forecast MAPE across all 11 services.
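A hypothetical sketch of that loop, with synthetic data standing in for the case-study dataset (the service names and numbers are made up): fit one regression per service with 3-fold CV and collect the mean score per service. Note the default scorer here is R², not the NMAPE scorer defined above.

```python
# Hypothetical per-service modeling loop on synthetic data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
rows = []
for service in ["GO-FOOD", "GO-RIDE", "GO-SEND"]:
    day = np.arange(60)
    cbv = 100 + 2 * day + rng.normal(0, 5, 60)   # toy upward trend per service
    rows.append(pd.DataFrame({"service": service, "dayofmonth": day, "total_cbv": cbv}))
data = pd.concat(rows, ignore_index=True)

scores = {}
cv = KFold(n_splits=3, random_state=1, shuffle=True)
for service, grp in data.groupby("service"):
    X, y = grp[["dayofmonth"]], grp["total_cbv"]
    scores[service] = cross_val_score(LinearRegression(), X, y, cv=cv).mean()

print(pd.Series(scores))   # one mean CV score per service
```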
The Solution to Part Three

Question 3 is an open-ended question, and readers are encouraged to solve it on their own. Some hypotheses are:
As this is specific to one particular area and geography, it's safe to assume the app more or less remained the same and product interventions could have played only a minor role. If there was a product intervention, it was specific to this particular area.
Good-quality, well-known restaurants and food chains were onboarded, so users now have a good selection to order from, including familiar restaurants.
The delivery speed was significantly improved by onboarding a higher number of delivery agents.
Delivery agents were re-trained to reduce cancellations.
Restaurant partners were helped to handle peak-time chaos better.
Useful Resources and References
Working In The ‘Central Analytics and Science Team’
How We Estimate Food Debarkation Time With ‘Tensoba’
Business Case Study Assignments For Entry Level Data Analysts
Solving Business Case Study Assignments For Data Scientists
Using Data To Appreciate Our Customers
Under the Hood of Gojek’s Automated Forecasting Tool
Experimentation at Gojek
GO-JEK’s Impact for Indonesia
GO-FAST: The Data Behind Ramadan
Pulp optimization.
Linear programming using pulp.
Marketing campaign optimization.
Simple ways to optimize something using python.
Conclusion

Case studies, when done right following the steps given above, will have a positive impact on the business. Recruiters aren't looking for answers but for an approach to those answers: the structure followed, the reasoning used, and business and practical knowledge of business analytics. This article provides an easy-to-follow framework for data analysts, using a real business case study as an example.
Key Takeaways:
There are two approaches to answering this case study: bottom-up and top-down. Here, the bottom-up approach has been used because of unfamiliarity with the data and the lack of business context.
Slicing and dicing the sales numbers across dimensions, identifying trends and patterns across services, is the best approach to figure out the challenges for growth.
Be crisp and to the point, while providing recommendations.
Let the data tell a story instead of just presenting data points. E.g.: the top three services contribute more than 90% of revenue; while growth is positive at the group level, individual services face challenges with ride completion, driver cancellation, etc.; for Food, reducing cancellations by Y% will drive Q2 revenues X% higher.
Optimization using PuLP is intimidating when there are more than 3 constraints. Writing down the LP problem on paper first, then coding it, makes the task much easier.
Good luck! Here’s my Linkedin profile if you want to connect with me or want to help improve the article. Feel free to ping me on Topmate/Mentro; you can drop me a message with your query. I’ll be happy to be connected. Check out my other articles on data science and analytics here.
The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
Related
It Planning During A Crisis
Without a doubt, 2023 changed everything. I like to compare it to a science fiction movie where time travel is involved. Clearly, we have all ended up permanently on a different space/time continuum.
We will go forward differently. For IT organizations, which do deserve the credit for keeping things running, many things have changed or will be different going forward. This has been the moment to yield results in alignment with the business partners.
The question is: how did IT operate and plan during the crisis? A recent CIOChat explored this topic.
CIOs, to be fair, were not omniscient and for this reason have not tried to take a bow. They are candid instead that they didn’t realize a new plan was needed until just before, or at about the point, offices were being closed. CIO Marin Davis says that they quickly agreed at this point “it wasn’t so much a re-plan, more of an okay what’s next then.”
CIOs shared openly that no one had a sense of the potential scope of the disaster. CIO David Seidl says, “we started worst-case scenario planning lightly a couple of weeks before people went home, but everybody thought it would be weeks of disruption. It was less of a re-plan, and more of a commitment to flexing while keeping our eyes on taking care of our community.”
CIO Pedro Martinez Puig similarly claims that “other than realizing that a re-plan was needed, we found that 2023 planning was useless and that we should move to a permanent recalibration, as no history would be appropriate to predict what will come next. Uncharted territories were clearly ahead.”
Many IT organizations were fortunate and didn’t need a re-plan, as they had a pandemic plan in place. For them, the crisis was about accelerating their technical roadmap as things proved to be more long-term than short-term. Where organizations were already running Zoom, they set up daily Zoom or Teams meetings, though most say this got tiring after a few months. For organizations that weren’t ready, the issue became how to flatten the expenditure curve at the very same time as growing support for everyone working remotely.
Analyst Jack Gold claims that “recognition of the scope of the crisis varied greatly, some assuming it would be a mild challenge at best, and for others it was about doing the right thing by accelerating transformations that had stalled. I’d say 20% of organizations hid their heads in the sand, 20% were already prepared, and the rest coped in various ways. About April or May, I saw major activity, and in June those not already able to work from home panicked and did whatever they had to do, including relaxing security.” Scary!
CIOs say there were fewer changes than one might imagine. In fact, CIO Dennis Klemenz found a silver lining in this year.
“Stakeholders were much more aware of technology and the way IT transforms the organization. The conversations were different, they weren’t expense discussions but rather were enablement discussions. Some executives even pressed us to spend more! This was refreshing.” Several CIOs were amazed as well with the tone of the conversation. Discussion often started with: How are you? How are you holding up? Is your family doing well? It was a human discussion first and then a strategic versus tactical discussion.
Interestingly, for those that already had a geographically distributed workforce, the change was minor. For example, Davis says there was “no difference at all, when you have geographically dispersed stakeholders it was always going to involve a lot of video calls and data submission.” With this said, CIOs said there was a larger dispersion between best-case and worst-case scenarios. And the base-case scenarios were different: pandemic short, pandemic long, or something else.
Gold says that what he saw “is that many companies went from a regimented strategic planning process to one of let’s just get whatever we need done, and got out the duct tape to make things work as best they could. But the real shock came to organizations with relatively co-located groups that could no longer interact in person and probably had few if any remote work tools in place.”
CIOs saw the focus move to customers, employees, and sustaining operations. Klemenz says that his organization put “at the front of the line collaboration, cybersecurity, safety, and digitalizing customer interactions. To the back of the line went efficiency and cost-savings projects. Efficiency, however, will need to come back in 2023.”
Collaboration software, where it was up to snuff, moved higher on most CIOs’ lists. Davis, for example, said that his firm “migrated Teams to a high-availability setup and suddenly this became the number 1 priority, along with adding VMs and resources to key gateways. Cybersecurity definitely remained a major focus.”
At the same time, enterprise architecture came to the forefront for many organizations. It was too hard to support diverse tools quickly under pressure. Organizations varied in where they made investments. Many made investments to do remote work better. With this said, Puig suggests that “the hybrid workplace is here to stay, secured and reliable home office spaces took an extra budget hit to ensure remote collaboration, while e-commerce capabilities reigned in the top.”
CIOs clearly are putting dollars where they are needed. Number 1 on CIOs’ lists is digital enhancements to CX, followed closely by collaboration tools, cybersecurity, and privacy. Powering CX, without question, is analytics and AI. This has moved process improvement and automation to a secondary priority.
Klemenz says that a secondary focus for his organization “will be internal process improvement and automation, and cost-savings and efficiency projects. We are migrating to digital documents and workflows to help our customers (no need to physically go sign forms) and employees, as they won’t need to push paper.”
Davis says that “one of the key issues we are starting to see is short-term digital actions that are not backed up by connected internal processes.” CIOs clearly are looking to improve remote collaboration both internally and externally. To help, some CIOs have been given multi-year budgets.
Klemenz says “technology architecture needs to have scalability in the design. We don’t buy a VPN device for 10 people if we have 100 people who might eventually need to VPN.” Going forward, CIOs believe that the scenarios they consider will grow.
For this reason, Sadin says that “continuity should continue as before: identify inherent risk, determine residual risk tolerance, and plan and execute mitigations. Business plus IT, however, needs to rethink the risks and redesign supply chains to handle greater business dislocations. This includes COVID, climate change, etc. Hopefully, all businesses realize the importance of business continuity.”
Finally, CIO Jim Russell says “fully cloud certainly moved forward on various timelines, replacing hybrid. And hopefully users will better understand the next time I come knocking. More support for testing business continuity has to be a by-product of the next normal.”
As we have suggested throughout this piece, this crisis changed us in many ways. No one was completely ready for the change that started last March. And while many were lucky, it is time now to plan for a less certain future and the continuity issues this future will present. This will be a big change for many. But the vanguard CIOs that I work with have shown they are ready to plan and deliver on the corporate future.
About the author:
Myles Suer is Principal Product Marketing Manager for Data at Dell Boomi. He is also the facilitator of the #CIOChat and the number 1 influencer of CIOs.
Rusty Metal Could Be The Battery The Energy Grid Needs
Electricity is highly perishable. If not used at the moment it is created, it rapidly dissipates as heat. Full decarbonization of the electric grid can become a reality only when vast amounts of solar and wind energy can be stored and used at any time. After all, we can’t harness renewable energy sources such as solar and wind 24/7.
At present, lithium-ion batteries make up a considerable chunk of the market for energy storage. But they are expensive, involve mining rare metals, and are far from environmentally sustainable. Finding an alternative that is less ecologically degrading is crucial—and so far, scientists are analyzing replacements for lithium-ion batteries with the help of raw materials such as sodium, magnesium, and even seawater. But in the last few years, the energy industry has been investing in metal-air batteries as a next-generation solution for grid energy storage.
Metal-air batteries were first designed in 1878. The technology uses atmospheric oxygen as a cathode (electron receiver) and a metal anode (electron giver). This anode consists of cheap and abundantly-available metals such as aluminum, zinc, or iron. “These three metals have risen to the top in terms of use in metal-air batteries,” says Yet-Ming Chiang, an electrochemistry professor at the Massachusetts Institute of Technology.
In 1932, zinc-air batteries were the first type of metal-air battery, widely used in hearing aids. Three decades later, NASA and GTE Lab scientists tried to develop iron-air batteries for NASA space systems but eventually gave up. Still, some researchers are chasing after the elusive technology.
The limits, and potential, of metal-air batteries

For more than six decades, researchers have believed that metal-air batteries could, in theory, reach a higher energy density than lithium-ion batteries. Still, they have repeatedly failed to live up to that potential.
In a lithium-ion battery, the process of power generation is straightforward. Lithium atoms merely bounce between two electrodes as the battery charges and discharges.
Involving air, however, makes the process trickier and adds another challenge: the difficulty of recharging. Oxygen reacts with the metal, creating a chemical that then sets off the electrolysis process, discharging energy. But instead of a reaction that can go back and forth, in metal-air batteries the transfer is usually one-way. Because of the constant flow of atmospheric oxygen into a metal-air battery, once you start it up, the battery can corrode quickly even when left unused, giving it a stunted shelf life.
Additionally, metal-air batteries’ watt-hours per kilogram (the energy stored per unit of the battery’s mass) is not currently very high. This is the main reason why electric vehicles cannot yet use metal-air batteries such as iron-air, Chiang tells Popular Science. “Lithium-ion batteries have 100 watt-hours per kilogram. But for iron-air, it was only 40 watt-hours per kilogram. The rate at which energy is stored and then discharged from the battery is relatively low in comparison,” he says.
[Related: We need safer ways to recycle electric car and cellphone batteries.]
But he argues that despite these limitations, stationary energy storage might utilize iron-air batteries. At a start-up called Form Energy, Chiang and his colleagues have been developing a new, low-cost iron-air battery technology that will provide multi-day storage for renewable energy by 2024.
“Even though it did not work out for EVs, iron-air batteries can be commercially scaled up for energy storage and help mitigate climate change by mid-century,” adds Chiang, who is also chief science officer at Form Energy.
New designs for metal-air batteries

Chiang’s team fine-tuned the process of “reverse rusting” in their battery technology, which efficiently stores and releases energy. As the iron chemically oxidizes, it loses electrons, which are sent through the battery’s external circuit to its air electrode. There, atmospheric oxygen becomes hydroxide ions, which then cross over to the iron electrode, forming iron hydroxide, which eventually becomes rust.
“When you reverse the electrical current on the battery, it un-rusts the battery. Depending on whether the battery is discharging or charging, the electrons are either taken away from or added to the iron,” explains Chiang. He claims that the battery can deliver clean electricity for 100 hours at a price of only $20 per kilowatt-hour, a bargain compared to lithium-ion batteries, which cost up to $200/kWh.
But iron isn’t the only metal on the rise. As the race to develop sustainable metal-air batteries for energy storage accelerates, several companies and their researchers are busy investing in zinc-air and aluminum-air batteries.
[Related: Renewable energy needs storage. These 3 solutions can help.]
Materials scientists at the University of Münster in Germany have reworked the design of zinc-air batteries with a new electrolyte that consists of water-repellant ions. In traditional zinc batteries, the electrolytes can be caustic with a high pH substance, making them corrosive enough to damage the battery. The researchers overcame this issue by ensuring that the water-repellant ions stick to the air cathode, so that water from the electrolyte cannot react with incoming oxygen. The zinc ions from the anode can travel freely to the cathode, where they interact with atmospheric oxygen and generate power repeatedly.
As researchers are getting closer to developing rechargeable zinc-air batteries, a Canadian company, Zinc8 Energy, has already unveiled its product. The start-up uses zinc-air batteries with a storage tank that contains potassium hydroxide and charged zinc. Electricity from the grid splits chemical zincate into zinc, water, and oxygen. This charges zinc particles and stores electricity.
When the electricity needs to feed into the grid, the charged zinc combines with oxygen and water, releasing the stored electricity and producing zincate. Then the entire process begins again. The company announced it would introduce these zinc-air flow batteries to the global market by installing its technology in a solar-paneled residential building in Queens, New York.
Like iron, zinc is widely available and has existing supply chains. Another metal that is also abundant, aluminum, is also being used to develop aluminum-air batteries. But unlike zinc-air batteries, aluminum-air batteries cannot recharge, says Chiang. The carbon footprint of aluminum production is also higher than other metal-air battery options.
By 2028, the global metal-air battery market is expected to reach $1,173 million, mainly for providing energy storage solutions. But for now, investors, industry analysts, and consumers alike are eagerly waiting for the next big breakthrough.
The Best Smart Home Devices 2023
Best smart speaker: Google Nest Mini
£49 – View on Google
There are a number of smart speakers available, and they all do roughly the same thing. They’ll act as a virtual assistant, responding to your voice commands, queries and music requests and will also serve as a voice-controlled hub for your other smart gear.
The two biggest smart speakers right now are the Amazon Echo and the Nest Mini.
In spite of the name change, the Nest Mini is far from an overhaul of the Home Mini – although you can think of it as the Home Mini 2. The audio has improved, making it a better choice if you want to use it as a standalone speaker, while a machine learning chip makes basic commands quicker to use.
The design is essentially unchanged, aside from a new Sky colour, but there is one welcome tweak: a small indentation on the back for wall-mounting, letting you easily hang the Nest Mini from any nail or screw.
Don’t rush out to grab a Nest Mini to replace an existing Home Mini. But for anyone looking to add Google to more rooms, or get the Google Assistant into their home for the first time, the Nest Mini is the new best, and cheapest, way to do it.
Read our full Google Nest Mini review or find out more in our smart speaker comparison.
Best smart light: Philips Hue (£151/$179.99) – View on Amazon
Smart lighting is one of the first things that comes to mind when people think of smart homes, and it’s easy to see why. Most modern smart lights let you set brightness, tone, and colour, control your bulbs with voice commands, and schedule them to turn on and off at certain times of day or when you enter or leave the house.
Philips Hue is one of the biggest names in smart lighting, and its starter kit is a great way to kick off – it includes three colour-changing bulbs along with the hub you need to control them. There are a variety of pre-set scenes, support for Google Home and Amazon Echo, and the ability to expand functionality even further with IFTTT. You can also buy individual bulbs to expand your set.
If Hue isn’t quite your thing, you can buy LIFX individual smart bulbs that work without any sort of hub, and we also love Nanoleaf’s interconnected Aurora light panels.
Check out some of the other options in our guide to the best smart lights.
Best smart thermostat: Hive Active Heating (£149.99) – View on Amazon
Smart heating may not be especially exciting, but if you’re looking for a great use of tech to help the environment and save you money, you can’t do better. By giving you precise control of how and when you heat your house – and data on your energy usage – you can potentially make a serious dent in your monthly bills.
The Hive Active Heating system is our top pick. The recent addition of multi-zone support is very useful, and the ability to boost heating and hot water is a great feature.
This second iteration of Hive is a giant leap forward from the solid (but somewhat dull) first-generation product. The interfaces of both the app and the thermostat are intuitive and quick to use. Hive can clearly help you make energy savings, and if you're conservative with your temperatures and schedule you can quickly recoup your outlay. It's easy to add Hive Active Light as well as smart plugs and switches, too.
Read our full Hive Active Heating 2 review
If you’re not sold on Hive, we’re also big fans of Nest and Honeywell Evohome, and you can find other alternatives in our dedicated smart heating round-up.
Best smart display: Google Nest Hub Max (£219/$229) – View on Google
Smart displays are one of the newer categories of smart devices. Essentially smart speakers with inbuilt screens, these don't quite have the full functionality of a dedicated tablet but are instead simplified devices that offer all the benefits of a virtual assistant with some extra visual aids.
Our current favourite is the Google Nest Hub Max, which is compact enough to fit into most rooms, and offers most of the key features you’d hope for. If you want a slightly smaller display, consider the original Nest Hub, while Alexa die-hards might prefer the Echo Show.
Check out our best smart display guide for more options.
Best smart kettle: iKettle (£129.99/$129.99) – View on Amazon
If there’s a more beautiful vision of our connected future than being able to pop the kettle on from bed, we are too busy dreaming of our morning cuppa to have thought of it. And thanks to IFTTT and Alexa integration, you don’t even need to get your phone out to boil the iKettle; you just tell Alexa to turn it on.
Set-up is a bit of a pain and took us a few attempts – and IFTTT integration took a few more – but once you’re there you can set the iKettle to heat to whatever temperature you like, to stay warm once it gets there, and even to trigger to boil at set times or when you come home from work.
Smart kettles are a small but growing field. Expect to see a lot more of these over the next few years as the standard appliance manufacturers get in on the act. For more smart culinary appliances, see our pick of the best smart kitchen gadgets.
Best smart coffee machine: Nespresso Expert & Milk (£299.99) – View on Amazon
If boiling the kettle from bed is one half of the English dream, then surely brewing a coffee is the other. Enter Nespresso's app-connected Expert range.
This is essentially a standard Nespresso machine (admittedly a very fully-featured one) with the added benefit of being able to set a coffee to brew remotely, or on a schedule – so long as you’ve left a pod in the machine and a cup at the ready.
It’ll also let you know how many pods you have left, remind you to restock (and let you buy more within the app), and give you alerts when it’s time to clean or descale the machine. Handy.
If that’s a bit steep for you, you should check out the filter-style Smarter Coffee Machine, or check out our pick of the best coffee machines – including ones without any smart stuff.
Best smart plug: Tapo P100 Mini Smart Wi-Fi Socket (£9.95) – View on Amazon
Smart plugs are an easy way to give non-smart tech a bit of an intelligence boost, letting you remotely switch mains power on or off, set a schedule, or trigger the plug to turn on or off based on your location.
Tapo is a new sub-brand from TP-Link and the P100 Mini Smart Wi-Fi Socket is one of the best smart plugs we’ve tested – and one of the best-priced.
The simple, compact design means you’ll be able to fit it into even cramped spots and we’ve found it quick and easy to get set up on the app. Once it’s ready to use, the app allows you to do the usual things like schedules and timers.
There’s even an ‘away mode’ so you can turn a light on and off randomly to make it look like you’re still at home. There’s also a button on the plug if you want to use it manually.
The Tapo P100 supports both Google Assistant and Alexa, so you can control it with your home assistant – with no hub required.
It’s worth noting that you can’t use it with the Kasa app, so it isn’t ideal if you already have TP-Link smart plugs or other smart devices. But, unless you’re already locked into a different system, there’s no reason to spend more on another smart plug.
There are plenty of other great alternatives, including good options from Belkin and Elgato. Take a look at our smart plug roundup for some inspiration.
Best robot vacuum: Eufy RoboVac 30C (£199.99) – View on Amazon
As with any robot vacuum cleaner, the RoboVac 30C won't eliminate the need to break out a traditional vacuum every so often, but it has sufficient suction power (1,500Pa) to keep your floors and carpets clean on the days you'd rather put your feet up and chill.
The ability to connect it to your home Wi-Fi network for remote operation is handy, if nothing new, but we love the voice assistant integration for properly lazy cleaning.
Read our full Eufy RoboVac 30C review. The Neato Botvac D7 Connected is another great option if you have a bit more to spend. Check out our guide to the best robot vacuum cleaners for more ideas.
Best smart security camera: Netatmo Presence (£249.99/$299.99) – View on Amazon
Security is one of the most obvious areas where smart tech offers a real benefit, giving you access to live feeds from cameras, cloud storage of footage, and even the potential for smart locks to govern who gets in and out (as long as you trust your chosen company’s cyber-security, of course…).
Alternatives include cameras from Hive and Ezviz – see our security camera buying guide for more options.
Best smart scale: Withings Body Cardio (£149.95/$179.95) – View on Amazon
Scales might seem like unlikely items to have smart capabilities, but it makes sense when you begin to think of them in terms of data. A good smart scale will track more than just weight – body-fat percentage, BMI, lean mass, water weight, and more – and it'll retain all that data for you and make it easy to understand.
The Body Cardio from Withings is our favourite right now thanks to support for up to eight users, syncing with Withings’ other health devices, and the fact that it measures pulse wave velocity – a first for the smart scale market.
You’ll also find great scales from Qardio and Fitbit – check out our full smart scale guide for more.
Best smart toothbrush: Oral-B Genius X (£339.99) – View on Amazon
First up, don't wince too much at that price tag: it's currently available on Amazon for £129, so there's no need to pay full price. What you get is a brush that combines all the features of a high-end electric toothbrush with an app that shows where you're brushing in real time, tracks your brushing habits, and keeps your dental health on track.
Sure, this is one of those examples where smart functionality isn’t necessary, but it can help you to improve your brushing technique and remind you to brush for the recommended time, twice a day.
Philips Sonicare is the main alternative to Oral-B, but you can see other options in our best smart toothbrush comparison.