Apple Debuts Advanced Data Protection To Bring End

Apple today has announced a dramatic expansion of end-to-end encryption for its various cloud services. Called Advanced Data Protection, this initiative expands end-to-end encryption to a number of additional iCloud services, including iCloud device backups, Messages backups, Photos, and much more.

iCloud already offered end-to-end encryption for 14 different data categories, including things like iCloud Keychain and Health data. Today’s expansion, however, brings the number of data categories protected by end-to-end encryption to 23. The new iCloud services and data types now protected by end-to-end encryption are:

Device Backups

Messages Backups

iCloud Drive

Notes

Photos

Reminders

Safari Bookmarks

Siri Shortcuts

Voice Memos

Wallet Passes

“iCloud encrypts your data to keep it secure,” Apple explains. “Advanced Data Protection uses end-to-end encryption to ensure that iCloud data types listed here can only be decrypted on your trusted devices, protecting your information even in the case of a data breach in the cloud.”

At launch, Advanced Data Protection will be opt-in only, meaning you have to go into the Settings app and navigate to the iCloud menu to enable the feature. While the feature will presumably be enabled for everyone eventually, the implementation is still in its early stages.

If you enable Advanced Data Protection, it means that no one will hold the keys to decrypt this data, including Apple. The only way to access the data is through one of your trusted Apple devices like your iPhone, iPad, or Mac.

This means that if you lose access to your devices, you will only be able to regain access using a recovery key or recovery contact. Because of this, if you enable the Advanced Data Protection feature, you’ll be guided through the process of setting up at least one recovery contact or recovery key before the feature is turned on.

Ivan Krstić, Apple’s head of security engineering and architecture, explained that this is Apple’s “highest level of cloud data security.”

“Advanced Data Protection is Apple’s highest level of cloud data security, giving users the choice to protect the vast majority of their most sensitive iCloud data with end-to-end encryption so that it can only be decrypted on their trusted devices.”

Advanced Data Protection is launching today in the latest iOS 16.2 beta. It will be available to everyone in the United States by the end of this year, with an expansion to the rest of the world slated for early 2023.

9to5Mac’s Take

This marks a huge upgrade to Apple’s cloud services in terms of encryption. In particular, the lack of end-to-end encryption for Messages in the cloud and device backups has been one of the most common complaints among users. While iMessage as a service has been end-to-end encrypted since the beginning, the loophole in the chain was that the iCloud backups and Messages backups were not end-to-end encrypted.

As Apple explains:

Messages in iCloud is end-to-end encrypted when iCloud Backup is disabled. When iCloud Backup is enabled, your backup includes a copy of the Messages in iCloud encryption key to help you recover your data. If you turn off iCloud Backup, a new key is generated on your device to protect future Messages in iCloud. This key is end-to-end encrypted between your devices and isnʼt stored by Apple.

But with the new Advanced Data Protection feature enabled, Messages in iCloud is “always end-to-end encrypted.” So when iCloud Backup is enabled, “everything inside it is end-to-end encrypted, including the Messages in iCloud encryption key.”

With this expansion, there are only three major iCloud data categories not covered by end-to-end encryption: iCloud Mail, Contacts, and Calendar. Apple says this is because these services need to rely on protocols that “interoperate with the global email, contacts, and calendar systems.”

More technical details are available in Apple’s Platform Security guide. There’s also a new support document with more details and an overview of end-to-end encryption for each service.



Apple Invents Advanced Presence Detection System With Intelligent Zooming And More

The U.S. Patent and Trademark Office published a patent application from Apple today that covers various methods of detecting a user’s presence and augmenting the user experience accordingly. Apple has covered face recognition and presence detection systems in various patent applications in the past, notably for multi-user logins, security features, and an Android-like face unlock feature. Today’s patent application covers even more implementations of Apple’s presence detection technology that would utilize ultrasonic sensors, microwave radar, and camera and audio systems to detect and identify the user. PatentlyApple covered the highlights of the patent including the ability to activate or augment features using presence detection:

In some embodiments, the device may also be configured to track the user movements (e.g., position and velocity) and, in response to certain movements, provide feedback and/or enter or change a state of operation. For example, movement toward the device may activate more features, such as providing more options/menus in a user interface, whereas movement away from the device may reduce the number of features available to a user, such as reducing the number of menus/options and/or reducing or increasing the size of the options displayed.

PatentlyApple also described another interesting possible implementation that would allow for intelligent zooming based on the movement of the user:

Additionally or alternatively, the display may zoom in or zoom out based on movement towards or away from the device. In some embodiments, a lateral movement of by the user (e.g., from left to right) may cause a change in a background and/or a screen saver image displayed on the device. Still further, the changing of the image may correspond generally with the sensed motion. For example, the movement from left to right may cause the image to be replaced in a left to right motion with another image. Alternatively, as a user moves from left to right, leaves or drapes may reflect the movement. That is, the leaves may be blown and tumble from left to right, or the drapes may sway in a manner corresponding to the detected movement.

Beyond simply detecting movement of the user, the report described the ability to detect a user’s heartbeat, skin tone, and more:

Some of these technologies may be utilized to determine physiological parameters when a human is in proximity to the device. For example, RADAR may be used to detect and/or locate a heartbeat in the room… In some embodiments, active IR may use multiple specific IR wavelengths to detect certain unique material properties, such as reflectivity of human skin or carbon dioxide emissions in breath. As such, the particular embodiments described herein are merely presented as examples and are not limiting.

…The method includes capturing an image using an image sensor and computing at least one of the following from the captured image: a skin tone detection parameter, a face detection parameter, a body detection parameter and a movement detection parameter. The method also includes utilizing at least one of the skin tone detection parameter, face detection parameter and the movement detection parameter to make a determination as to whether a user is present and, if it is determined that a user is present, changing a state of the computing device.

You can get all the details of the patent from PatentlyApple.


Remix Os To Bring Desktop

Remix OS to bring desktop-like Android to PCs, Macs on Jan. 12

You’re probably not familiar with the Chinese startup named Jide, but if you’re anything of an Android fan, you’d better take note. The company is close to doing what many have tried but few have managed: giving Android a more desktop-friendly experience. First there was the Jide Ultra tablet “Surface clone”. Then came the Remix Mini mini PC. Starting 12th January, Jide will be embracing everyone and their hardware by releasing Remix OS 2.0 for PCs and Macs, or basically any Intel or AMD device. And for free!

What is Remix OS 2.0 anyway? Disregarding the version number, this is, as its name says, a remix. Of Android, that is. It has taken the Android platform and transformed it, even shoehorned it if you will, into a user experience not that different from a desktop. Floating, resizable windows? Check. Bottom panel with “start menu” and notification tray? Check. Real multi-tasking? Check. It’s far from perfect, at least not yet, but so far it is the only one that gets this close to that dream.

Remix OS was initially designed for Jide’s two devices, both of which are powered by ARM chips. But an “Android PC” experience is probably best experienced on a PC as well. Or in some cases, a Mac too. With the upcoming Remix OS 2.0 release, anyone can download the image and install it on any device running on an Intel or AMD processor. Or, they can even install it on a USB and take it with them anywhere like a portable OS on a stick. Simply plug it into a computer or laptop and boot into the USB to get your Android PC running.

Interestingly, for this particular campaign, Jide isn’t doing it alone. It has partnered with Android-x86, the community project whose goal is to make Android run on any x86 machine, meaning Intel architecture (which includes AMD). Android-x86 has been doing this for years now but, when it comes to the interface, it has stuck with the default Android UI, which is great for x86 phones and tablets, but not so much for laptops and desktops. Now Jide is remixing that into a better shape as well.

It is also a bit curious that Android-x86 would take this path. Just last month, it was involved in a bit of Internet controversy over Console OS, another crowdfunded project that also sought to bring Android to the desktop. Android-x86 basically accused Console OS of simply “stealing” its code and work, presenting it as its own, and then trying to make a buck out of it. The back and forth mudslinging has, so far, remained unresolved.

Controversy aside, Jide’s Remix OS does exist, works, is already in the hands of owners of Jide’s devices, and, in less than a week, will also be in the hands of anyone who has a PC, Mac, or USB stick.

SOURCE: Jide

VIA: XDA Developers

End To End Potato Leaf Disease Prediction Project – A Complete Guide

This article was published as a part of the Data Science Blogathon

In this article, we will develop an end-to-end project based on pure deep learning. (I use the term “pure” deliberately; the reason will become clear in my later articles.) The goal of this project is to detect and identify potato leaf diseases, of which there are several varieties. Our naked eyes cannot reliably tell them apart, but a Convolutional Neural Network can do so easily. You may not believe it, but the top-5 error of some pre-trained neural network architectures is approximately 3%, which is even lower than that of human vision: on large-scale image benchmarks, the human top-5 error has been reported to be 5.1%.

Problem Statement for Potato Leaf Disease Prediction

Farmers who grow potatoes suffer serious financial losses each year from the several diseases that affect potato plants. The most frequent are early blight and late blight: early blight is caused by a fungus, while late blight is caused by a specific micro-organism. If farmers detect these diseases early and apply the appropriate treatment, they can avoid a great deal of waste and prevent economic loss. The treatments for early blight and late blight differ slightly, so it is important to identify accurately which disease a potato plant has. Behind the scenes, we are going to use a Convolutional Neural Network, i.e. deep learning, to diagnose plant diseases.

Potato Leaf Disease Prediction Project Description

Here, we’ll develop an end-to-end Deep Learning project in the field of agriculture. We will create a simple Image Classification Model that will categorize Potato Leaf Disease using a simple and classic Convolutional Neural Network Architecture. We’ll start with collecting the data, then model building, and finally, we’ll use Streamlit to build a web-based application and deploy it on Heroku.

Let’s fire🔥

Potato Leaf Disease Prediction Data Collection

Any data science project starts with acquiring data, so first we need to collect it. We have three options. The first is to use ready-made data: we can buy it from a third-party vendor or get it from Kaggle and similar sources. The second is to have a team of data annotators whose job is to collect these images from farmers and label each one as a healthy potato leaf or as showing early or late blight disease. The annotators work with farmers, go out to the fields, and either ask the farmers to photograph leaves or take the photographs themselves, then classify them with the help of agricultural experts. This manual process is very time-consuming. The third option is to write a web-scraping script that crawls websites containing potato leaf images, collects those images, and uses annotation tools to label the data. In this project, I am using ready-made data that I got from Kaggle.

Potato Leaf Disease Prediction Data Loading

Our dataset must be in the following format.
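A layout along the following lines, with one sub-folder per class under each of the Train, Valid, and Test splits, works with Keras' flow_from_directory; the paths and class names below match the ones used in the code later in this article, though the exact folder names in your copy of the Kaggle dataset may differ slightly:

Potato/
├── Train/
│   ├── Potato__Early_blight/
│   ├── Potato__Late_blight/
│   └── Potato__healthy/
├── Valid/
│   ├── Potato__Early_blight/
│   ├── Potato__Late_blight/
│   └── Potato__healthy/
└── Test/
    ├── Potato__Early_blight/
    ├── Potato__Late_blight/
    └── Potato__healthy/

Each class folder simply contains the JPG images for that class.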

In this project, we are going to use only 900 images to train our model and 300 images for validation. As we all know, training a deep learning model requires a lot of data. To overcome this problem we will use a simple and effective technique called data augmentation. Let’s first see what data augmentation is.

Data Augmentation: Data Augmentation is a process that generates several realistic variants of each training sample, to artificially expand the size of the training dataset. This aids in the reduction of overfitting. In data augmentation, we will slightly shift, rotate, and resize each image in the training set by different percentages, and then add all of the resulting photos to the training set. This allows the model to be more forgiving of changes in the object’s orientation, position, and size in the image. The contrast and lighting settings of the photographs can be changed. The images can be flipped horizontally and vertically. We may expand the size of our training set by merging all of the modifications.

Let’s begin the coding section by importing the necessary libraries.

import numpy as np
import matplotlib.pyplot as plt
import glob
import cv2
import os
import matplotlib.image as mpimg
import random
from sklearn import preprocessing
import tensorflow.keras as keras
import tensorflow as tf
from keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.utils import to_categorical

Let’s now add some static variables to aid us in our progress.

SIZE = 256
SEED_TRAINING = 121
SEED_TESTING = 197
SEED_VALIDATION = 164
CHANNELS = 3
n_classes = 3
EPOCHS = 50
BATCH_SIZE = 16
input_shape = (SIZE, SIZE, CHANNELS)

To begin, we must first establish the setup for augmentation that we will use on our training dataset.

train_datagen = ImageDataGenerator(
    rescale = 1./255,
    rotation_range = 30,
    shear_range = 0.2,
    zoom_range = 0.2,
    width_shift_range = 0.05,
    height_shift_range = 0.05,
    horizontal_flip = True,
    fill_mode = 'nearest')

One thing to keep in mind here: we will not apply the same augmentation to the validation and test datasets that we used on the training dataset, because those datasets are only used to evaluate the performance of our model and guide how its parameters are tuned. Our objective is to create a generalized and robust model, which we can achieve by training it on a very large amount of data. That is why we apply data augmentation only to the training dataset, artificially increasing its size.

validation_datagen = ImageDataGenerator(rescale = 1./255)
test_datagen = ImageDataGenerator(rescale = 1./255)

Let’s now load the training, testing, and validation datasets from the directory and perform data augmentation on them.

Note that our data must be in the above-mentioned format. Otherwise, we may get an error or our model’s performance may suffer. The code below loads our dataset from the directory, resizes all of the images to the same dimensions, creates batches, and uses RGB as the color mode.

train_generator = train_datagen.flow_from_directory(
    directory = '/content/Potato/Train/',   # this is the input directory
    target_size = (256, 256),               # all images will be resized to 256x256
    batch_size = BATCH_SIZE,
    class_mode = 'categorical',
    color_mode = 'rgb')

validation_generator = validation_datagen.flow_from_directory(
    '/content/Potato/Valid/',
    target_size = (256, 256),
    batch_size = BATCH_SIZE,
    class_mode = 'categorical',
    color_mode = 'rgb')

test_generator = test_datagen.flow_from_directory(
    '/content/Potato/Test/',
    target_size = (256, 256),
    batch_size = BATCH_SIZE,
    class_mode = 'categorical',
    color_mode = 'rgb')

Let’s build a simple and classical Convolutional Neural Network Architecture now.

Our dataset is preprocessed and we are now ready to build our model. We are going to use a Convolutional Neural Network, one of the best-known neural network architectures for image classification problems. Here we create a simple, classical architecture using the Keras Sequential API: a stack of convolutional and pooling layers, followed by a flatten layer and a dense layer. At the end we use a dense layer with a softmax activation function, which returns the likelihood of each class.

model = keras.models.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation = 'relu', input_shape = input_shape),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Dropout(0.5),
    keras.layers.Conv2D(64, (3, 3), activation = 'relu', padding = 'same'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Dropout(0.5),
    keras.layers.Conv2D(64, (3, 3), activation = 'relu', padding = 'same'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation = 'relu', padding = 'same'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation = 'relu', padding = 'same'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Conv2D(64, (3, 3), activation = 'relu', padding = 'same'),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(32, activation = 'relu'),
    keras.layers.Dense(n_classes, activation = 'softmax')
])

I hope you are familiar with how convolution and max pooling work behind the scenes. This architecture was arrived at by trial and error; you can try your own architectures and experiment by removing a few layers, adding more layers, or using dropout. Our model architecture is now ready.

The next step is to investigate model architecture.

Let’s have a look at the brief summary of our model. We have a total of 185,667 trainable parameters. These are the weights we’ll be working with.

model.summary()

Model: "sequential"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 254, 254, 32)      896
 max_pooling2d (MaxPooling2D)    (None, 127, 127, 32)      0
 dropout (Dropout)               (None, 127, 127, 32)      0
 conv2d_1 (Conv2D)               (None, 127, 127, 64)      18496
 max_pooling2d_1 (MaxPooling2D)  (None, 63, 63, 64)        0
 dropout_1 (Dropout)             (None, 63, 63, 64)        0
 conv2d_2 (Conv2D)               (None, 63, 63, 64)        36928
 max_pooling2d_2 (MaxPooling2D)  (None, 31, 31, 64)        0
 conv2d_3 (Conv2D)               (None, 31, 31, 64)        36928
 max_pooling2d_3 (MaxPooling2D)  (None, 15, 15, 64)        0
 conv2d_4 (Conv2D)               (None, 15, 15, 64)        36928
 max_pooling2d_4 (MaxPooling2D)  (None, 7, 7, 64)          0
 conv2d_5 (Conv2D)               (None, 7, 7, 64)          36928
 max_pooling2d_5 (MaxPooling2D)  (None, 3, 3, 64)          0
 flatten (Flatten)               (None, 576)               0
 dense (Dense)                   (None, 32)                18464
 dense_1 (Dense)                 (None, 3)                 99
=================================================================
Total params: 185,667
Trainable params: 185,667
Non-trainable params: 0
_________________________________________________________________

Compile the Potato Leaf Disease Prediction model

model.compile(
    optimizer = 'adam',
    loss = tf.keras.losses.CategoricalCrossentropy(),
    metrics = ['accuracy']
)

We’re using the Adam optimizer, which is one of the most common optimizers, but you can also check out others. We’re using categorical cross-entropy as the loss because we’re dealing with a multi-class classification problem, and we’re using the accuracy metric to track our model’s performance during training.

Train the network

history = model.fit_generator(
    train_generator,
    steps_per_epoch = train_generator.n // BATCH_SIZE,   # batches per epoch: one pass over the training data
    epochs = EPOCHS,
    validation_data = validation_generator,
    validation_steps = validation_generator.n // BATCH_SIZE
)

In the history parameter, we’re recording the history of each epoch so that we can create some charts to compare our model’s performance on the training and validation sets.

Let’s see how well the model performs on the test data.

Let’s test our model on the test dataset. Before putting our model into production, we must first test it on a test dataset to see how it performs.

score = model.evaluate_generator(test_generator)
print('Test loss : ', score[0])
print('Test accuracy : ', score[1])

--------------OUTPUT--------------

Test Loss : 0.10339429974555969
Test Accuracy : 0.9733333587646484

On the test set, we have a 97% accuracy rate, which is rather good. Let’s now look at some performance graphs.

Let’s have a look at the history parameter.

The history object is a Keras callback that records the metrics of every epoch as a list; let’s use it to plot some interesting charts. Let’s start by putting all of these values into variables.

acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']

plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(range(EPOCHS), acc, label='Training Accuracy')
plt.plot(range(EPOCHS), val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(range(EPOCHS), loss, label='Training Loss')
plt.plot(range(EPOCHS), val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()

This graph shows the accuracy of training vs validation. Epochs are on the x-axis, and accuracy and loss are on the y-axis.

Let's save our model.

# it will save the model
model.save('final_model.h5')

Our model will be saved in HDF5 format since we need to save all learned parameters in deep learning, which might take up a lot of space, and HDF5 can easily hold a lot of data.

Streamlit – The Boom!

If you wanted to build a machine learning web application in the past, you had to use Flask or Django, or hire a full-stack developer, but when Streamlit came along it changed the entire ecosystem, allowing anyone to build a machine learning web application, a data science web application, a data science dashboard, or a data analytics dashboard. Streamlit is a Python library (and a startup) that allows you to construct web applications for free. It also offers Streamlit Cloud services, where we can deploy our apps at different price tiers.

Streamlit is a free, open-source Python framework that allows us to quickly develop a web application without the requirement of a backend server and without having to write HTML, CSS, or JavaScript. We can start building a really good web application simply by using our existing Python skills. I’ve created a simple web application that accepts an image as input and applies the same preprocessing steps to it that we applied to the training dataset during training. This is necessary because when we save our model, it only saves the trained parameters, so we must preprocess our input manually; keep this in mind when building any web application around a pre-trained model.

Web App

# For potato leaf disease prediction
import streamlit as st
from PIL import Image
import numpy as np
import tensorflow.keras as keras
import matplotlib.pyplot as plt
import tensorflow_hub as hub

# hide Streamlit's default menu and footer
hide_streamlit_style = """
<style>
#MainMenu {visibility: hidden;}
footer {visibility: hidden;}
</style>
"""
st.markdown(hide_streamlit_style, unsafe_allow_html = True)

st.title('Potato Leaf Disease Prediction')

def main() :
    file_uploaded = st.file_uploader('Choose an image...', type = 'jpg')
    if file_uploaded is not None :
        image = Image.open(file_uploaded)
        st.write("Uploaded Image.")
        figure = plt.figure()
        plt.imshow(image)
        plt.axis('off')
        st.pyplot(figure)
        result, confidence = predict_class(image)
        st.write('Prediction : {}'.format(result))
        st.write('Confidence : {}%'.format(confidence))

def predict_class(image) :
    with st.spinner('Loading Model...'):
        classifier_model = keras.models.load_model(r'final_model.h5', compile = False)

    shape = (256, 256, 3)
    # wrap the loaded model as a single layer; this also works
    model = keras.Sequential([hub.KerasLayer(classifier_model, input_shape = shape)])
    # apply the same preprocessing used during training
    test_image = image.resize((256, 256))
    test_image = keras.preprocessing.image.img_to_array(test_image)
    test_image /= 255.0
    test_image = np.expand_dims(test_image, axis = 0)
    class_name = ['Potato__Early_blight', 'Potato__Late_blight', 'Potato__healthy']
    prediction = model.predict(test_image)
    confidence = round(100 * (np.max(prediction[0])), 2)
    final_pred = class_name[np.argmax(prediction)]
    return final_pred, confidence

# custom footer styling
footer = """
<style>
a:link , a:visited {
    color: white;
    background-color: transparent;
    text-decoration: None;
}
a:hover, a:active {
    color: red;
    background-color: transparent;
    text-decoration: None;
}
.footer {
    position: fixed;
    left: 0;
    bottom: 0;
    width: 100%;
    background-color: transparent;
    color: black;
    text-align: center;
}
</style>
"""
st.markdown(footer, unsafe_allow_html = True)

if __name__ == '__main__' :
    main()

Output : 

Internally, the web app uses our previously developed deep learning model to detect potato leaf diseases. Now that you have a better understanding of what’s going on, let’s move on to the next phase. I’m going to deploy it to Heroku, so you’ll need to sign up for Heroku first and then follow the steps below.

1. Make a GitHub repository and add your model, web application python file, and model building source code to it.

2. Once you’ve completed that, create a requirements.txt file in which you’ll list any libraries, packages, or modules that you’ll be utilizing in this project, for example as sketched below.
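For illustration, a minimal requirements.txt covering the libraries imported in the web app above might look like the following; pin whichever versions you actually trained and tested with:

streamlit
tensorflow
tensorflow-hub
numpy
matplotlib
Pillow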

3. Create a Procfile, fill it with the code below, and save it to your project’s GitHub repository.

web: sh setup.sh && streamlit run your_webapp_name.py

4. Create a setup.sh file and paste the code below into it.

mkdir -p ~/.streamlit/

echo "\
[general]\n\
email = \"mention_your_mailid_here\"\n\
" > ~/.streamlit/credentials.toml

echo "\
[server]\n\
headless = true\n\
enableCORS = false\n\
port = $PORT\n\
" > ~/.streamlit/config.toml

5. After you’ve completed all of the procedures, go to Heroku and build a new app, then select Connect to GitHub.

If you run into any problems during deployment, go to my GitHub repository and utilize the same file format that I did. To see my GitHub repository, go here.

Stay Classy & Thanks for reading!

The media shown in this article is not owned by Analytics Vidhya and are used at the Author’s discretion.


Apple Pay May Mean The End Of Physical Bank Cards Within 2

While we’d all expected plastic bank cards to be replaced by apps eventually, the CEO of mobile banking startup Moven is suggesting that Apple’s backing could mean the end of physical bank cards within 2-3 years.

The additional sweeteners here are threefold. Firstly, tokenization will avoid much of the type of breaches we’ve seen at Target and Home Depot because the token is only a one-time-use thing. Secondly, the move to tokens and the combination of biometrics, etc allow for the emergence of a ‘cardholder present’ approach to interchange rates that will potentially give mobile payments a competitive merchant rate. Lastly, the US might effectively jump straight from magstripe to mobile, especially if issuers can figure out how to reduce the cost of card replacement by moving straight to mobile SE and tokens …

The U.S. has lagged behind Europe and Asia in switching from magnetic stripe cards to ones with embedded chips, for two main reasons, says Moven’s Brett King. First, uncertainty over standards, with some suggesting that Bluetooth LE/iBeacon might render NFC obsolete, leading to banks entering wait-and-see mode. Apple’s choice of NFC for Apple Pay means “that debate is over.”

Second, banks have been deterred by the cost of the switch. The cards themselves are more expensive to make, as are the more sophisticated payment terminals needed. If banks moved straight to mobile, however, the costs of EMV cards could be eliminated.

Apple Pay is supported by the iPhone 6 and iPhone 6 Plus, with the Apple Watch likely allowing mobile payment via older iPhones. Higher-end Android handsets also have NFC support, allowing them to be used for mobile payment with suitable apps.

If the idea of abandoning physical cards so quickly seems ambitious, King points out that the MINT markets – Mexico, Indonesia, Nigeria, Turkey – are already planning to move directly to mobile for new account holders, and that Apple has removed the final barrier to mature markets following suit.

A likely model for adoption rate for mobile payments in developed markets is 2-3 years, based on what we’ve seen around Kindle, iTunes, App and Smartphone adoption over the last 7-8 years.

Apple may have taken its time in adopting NFC, but just as it did with smartphones and tablets, the company looks set to move mobile payment into the mainstream.

The move may not be without its regulatory hurdles, however. A piece in The Hill suggests that Apple may have effectively become a bank.

By moving into the mobile payment space, Apple might soon find itself subjected to new oversight from federal regulators.

“Rules that apply to plastic card payments also apply to payments with a phone,” said Moira Vahey, a spokeswoman for the Consumer Financial Protection Bureau (CFPB).

Jason Oxman, CEO of the Electronic Transactions Association, the trade group for payment companies, said that use of Apple Pay may also create new privacy issues.

“It was never a question for regulators or legislators: ‘Are you going to be using location-based technology for your credit card,’” he said. “Because your plastic credit card doesn’t know where you are. But your phone does.”

Apple is on record as stating that Apple Pay does not store card numbers (using a separate unique number to identify the card), nor log transactions.

Via Finextra. Photo credit: AP.


Terriers Looking To End Beanpot Drought

Terriers Looking to End Beanpot Drought

Men’s, women’s tournaments kick off tonight

Evan Rodrigues (SMG’15), one of two men’s hockey assistant captains, hopes to help the Terriers capture their first Beanpot title since 2009. Photo by Jim Pierce

The beginning of February heralds the arrival of one of Boston’s most storied sporting events: the Beanpot Tournament. Since 1952, the BU, BC, Harvard, and Northeastern men’s varsity hockey teams have faced off for a chance to win bragging rights as the best team in town. The women’s Beanpot tradition, not quite so venerable, dates back to 1979.

Men’s Ice Hockey

The 63rd men’s Beanpot Tournament, postponed from last night because of Monday’s snowstorm, begins tonight at TD Garden, and with three of the four schools playing ranked in the top 20 nationally, it promises to be highly competitive.

The Terriers have won 29 Beanpots since the tournament began. When they take to the ice tonight, they intend to move one step closer to making it an even 30. The team hasn’t won a Beanpot title since 2009, when BU defeated Northeastern 5-2, and has had a humiliating three last-place finishes over the past four years.

But this year fate is on their side. BU enters the Beanpot with a 16-4-4 record (11-2-2 Hockey East) after a dismal showing last year, with only one win on the road. As a result of that rocky season, the Terriers adopted “Never Again” as their rallying cry this season. The team is currently ranked at the top in Hockey East.

“I think we’re just a good team all around with a lot of chemistry. We have a solid young class. It’s looking good so far, but we have a lot ahead of us,” says center Danny O’Regan (COM’16).

But two of the team’s four losses this season have come against Beanpot teams—Harvard and Boston College. The Terriers lost 3-2 in overtime to the Crimson just before Thanksgiving, and will be looking to avenge that loss tonight. They split a pair of games with Boston College earlier this season, beating their Comm Ave rival 5-3 in early November at Chestnut Hill and succumbing to them at Agganis Arena 4-2 January 16.

The team is not only aiming for an end to a six-year Beanpot title drought, but is counting on a victory giving them the momentum they need to enter the season’s home stretch.

Women’s Ice Hockey

A surprise victory against Harvard would propel the Terriers to the Beanpot final next Tuesday against either number one Boston College or Northeastern, currently unranked.

Despite being the underdog, the Terriers (17-6-2 overall, 12-4-0 Hockey East) are confident they can pull off a dramatic upset thanks to key players like two-time Olympic gold medal winner Marie-Philip Poulin (CAS’15) and forwards Sarah Lefort (CGS’14, SAR’16) and Kayla Tutino (COM’16). The three began playing on the same line last month, earning 13 goals, 16 assists, and 29 points among them in the seven games since.

Poulin has enjoyed a stellar season after taking last year off to train for, and compete in, the Sochi Winter Olympics last February. To date, she has scored 17 goals and 30 points in just 20 games. Lefort is second in goals, with 15, and Tutino is tied for team lead in assists, with 16. The Terriers have also received plenty of support from freshman Victoria Bach (CGS’16), who has tallied 14 goals.

For years, the women’s Beanpot was dominated by Harvard and Northeastern. Between them, the two schools have won 29 of 36 Beanpots. Since 2006, however, Boston College has claimed five titles.

The Terriers are determined that this is the season they’ll finally claim a Beanpot title.

“It’s a big tradition, there’s a lot of history, and it’s something we want to be a part of,” Tutino says. “We want to see the women’s team engraved on the Beanpot trophy.”

“We’d like to break that jinx,” says head coach Brian Durocher (SED’78), “and see if we can put another date up on the banner next to the 1981 club team that won a long time ago.”

The BU men’s ice hockey team takes on Harvard in the first round of the 63rd annual men’s Beanpot Tournament tonight, Tuesday, February 3, at 5 p.m. at TD Garden, 100 Legends Way, Boston. Boston College and Northeastern play the second semifinal game beginning at 8 p.m. Ticket prices and information can be found here. The winners of each game will meet in the Beanpot final next Monday, February 9, at TD Garden at 7:30 p.m. The losers play in the consolation game at 4:30 p.m. The games will be televised on NESN.

The BU women’s ice hockey team goes up against the Harvard Crimson in the first round of the 37th annual women’s Beanpot Tournament tonight at 8 p.m. at Harvard’s Bright-Landry Hockey Center in Cambridge. Northeastern and Boston College play in the first semifinal game, at 5 p.m. Tickets are available at the Harvard online ticket office. The winners of each game will meet in the Beanpot final next Tuesday, February 10, at Harvard’s Bright-Landry Hockey Center at 7:30 p.m. The losers play in the consolation game at 4:30 p.m. The games will be streamed live on ESPN3.

Andre Khatchaturian can be reached at [email protected].

