
Bandirmaspor vs Esenler Erokspor

Expert Overview: Bandirmaspor vs Esenler Erokspor

This match between Bandirmaspor and Esenler Erokspor presents an intriguing set of betting predictions, pointing to a high-scoring affair. With an average of 3.65 total goals expected, both teams look likely to find the back of the net. The probabilities for over 1.5 goals (90.50%) and over 2.5 goals (53.20%) indicate a strong likelihood of a goal-rich encounter. Additionally, the probability of both teams scoring (64.60%) and of over 1.5 goals in the first half (65.80%) further supports the expectation of an open, attacking game.

Bandirmaspor

Recent form: D D W W L

Esenler Erokspor

Recent form: D D W L W

Date: 2025-10-26
Time: 10:30 (FT)
Venue: Not Available Yet
Score: 0-0

Predictions:

| Market | Prediction | Result | Odds |
|:---|:---:|:---:|:---:|
| Over 1.5 Goals | 90.50% | (0-0) | 1.30 |
| Over 0.5 Goals HT | 90.20% | (0-0) 0-0 1H | 1.40 |
| Home Team To Score In 2nd Half | 78.90% | (0-0) | |
| Both Teams Not To Score In 2nd Half | 78.70% | (0-0) 0-0 2H | 1.30 |
| First Goal Between Minute 0-29 | 79.10% | (0-0) | 1.83 |
| Away Team To Score In 2nd Half | 72.90% | (0-0) | |
| Away Team To Score In 1st Half | 73.70% | (0-0) | |
| Both Teams To Score | 64.60% | (0-0) | 1.80 |
| Over 1.5 Goals HT | 65.80% | (0-0) 0-0 1H | 2.90 |
| Home Team To Score In 1st Half | 59.60% | (0-0) | |
| Over 2.5 Goals | 53.20% | (0-0) | 2.00 |
| Both Teams Not To Score In 1st Half | 53.00% | (0-0) 0-0 1H | 1.18 |
| Last Goal 73+ Minutes | 57.40% | (0-0) | 1.83 |
| Avg. Total Goals | 3.65 | (0-0) | |
| Avg. Goals Scored | 2.72 | (0-0) | |
| Avg. Conceded Goals | 2.52 | (0-0) | |
| Red Cards | 0.25 | (0-0) | |

Prediction Analysis

  • Over 1.5 Goals: 90.50% – The high probability points to at least two goals in this match, an exciting prospect for those who enjoy goal-heavy fixtures.
  • Over 0.5 Goals HT: 90.20% – At least one goal by halftime is very likely, hinting at an early start to the scoring.
  • Home Team To Score In 2nd Half: 78.90% – Bandirmaspor seem poised to capitalize in the second half, potentially altering the course of the game after the break.
  • Both Teams Not To Score In 2nd Half: 78.70% – At the same time, the model assigns a strong chance that neither team adds to its tally after halftime, possibly due to fatigue or tactical adjustments.
  • First Goal Between Minute 0-29: 79.10% – An early goal is anticipated, setting the tone for an aggressive start from both sides.
  • Away Team To Score In 2nd Half: 72.90% – Esenler Erokspor have a strong chance of finding their scoring rhythm in the latter stages of the match.
  • Away Team To Score In 1st Half: 73.70% – The away side are also expected to contribute early on, potentially balancing the scoreline before halftime.
  • Both Teams To Score: 64.60% – A likely outcome, with both teams expected to breach each other’s defences.
  • Over 1.5 Goals HT: 65.80% – At least two goals are expected by halftime, pointing to an intense first half.
  • Home Team To Score In 1st Half: 59.60% – Bandirmaspor are predicted to make their mark early, giving them a potential lead going into the break.
  • Over 2.5 Goals: 53.20% – Three or more goals by full-time are marginally favoured, consistent with the 3.65 expected total.
  • Both Teams Not To Score In 1st Half: 53.00% – Despite the high goal expectations overall, there is a notable chance that neither team scores in the first half.
  • Last Goal 73+ Minutes: 57.40% – A late goal is probable, and could prove decisive in determining the match outcome.
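
A quick way to sanity-check these markets is to compare each model probability with the probability implied by the bookmaker's decimal odds; where the model probability sits clearly above the implied one, the price may hold value. Below is a minimal Python sketch of that comparison using figures from the table above (the selection of markets and the 5% edge threshold are illustrative assumptions, not part of the source predictions):

```python
# Compare model probabilities with the probabilities implied by decimal odds.
# Figures are taken from the prediction table above; the edge threshold is
# an illustrative assumption.

markets = [
    # (market, model probability, decimal odds)
    ("Over 1.5 Goals", 0.9050, 1.30),
    ("Both Teams To Score", 0.6460, 1.80),
    ("Over 2.5 Goals", 0.5320, 2.00),
    ("Over 1.5 Goals HT", 0.6580, 2.90),
]

EDGE_THRESHOLD = 0.05  # assumed cut-off for flagging a market

for name, model_prob, odds in markets:
    implied_prob = 1.0 / odds          # bookmaker's implied probability
    edge = model_prob - implied_prob   # positive edge favours the bet
    flag = "possible value" if edge > EDGE_THRESHOLD else ""
    print(f"{name:20s} model={model_prob:.1%} implied={implied_prob:.1%} "
          f"edge={edge:+.1%} {flag}")
```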

## Writeup Template

**You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.**

You’re reading it! And here I am, presenting a writeup of my completed project.

### Build a Traffic Sign Recognition Project

The goals / steps of this project are the following:
* Load the data set (see below for links to the project data set)
* Explore, summarize and visualize the data set
* Design, train and test a model architecture
* Use the model to make predictions on new images
* Analyze the softmax probabilities of the new images
* Summarize the results with a written report

[//]: # (Image References)

[image1]: ./examples/image.png "Visualization"
[image2]: ./examples/visualization.png "Visualization"
[image3]: ./examples/convnet.png "Visualization"
[image4]: ./examples/preprocess.png "Preprocessing"
[image5]: ./examples/training.png "Training"
[image6]: ./examples/softmax.png "Softmax"
[image7]: ./test_images/speed_limit_30.jpg "Speed limit (30km/h)"
[image8]: ./test_images/speed_limit_60.jpg "Speed limit (60km/h)"
[image9]: ./test_images/bumpy_road.jpg "Bumpy road"
[image10]: ./test_images/slippery_road.jpg "Slippery road"
[image11]: ./test_images/dangerous_curve_right.jpg "Dangerous curve right"

## Rubric Points
### Here I will consider the [rubric points](https://review.udacity.com/#!/rubrics/481/view) individually and describe how I addressed each point in my implementation.


### Writeup / README

#### My approach:

My approach was based on what we covered during the lectures, as well as on some other papers and projects.

For preprocessing I used the `cv2` library, which is part of the OpenCV package.
For training I used the `keras` library, which is distributed with the TensorFlow package.

I started by loading and visualizing the dataset.

Then I used the `sklearn.utils.shuffle` function to shuffle the dataset, and split it into a training set and a validation set.
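
A minimal sketch of that step, assuming the `X_train_orig`/`y_train_orig` arrays from the loading code in the Data Set Summary section below (the 80/20 ratio is an illustrative assumption, and since the project data also ships a separate validation file, this split is optional):

```python
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split

# Shuffle so batches are not ordered by class
X_shuffled, y_shuffled = shuffle(X_train_orig, y_train_orig)

# Carve out a validation set; the 20% ratio is an assumption
X_train, X_valid, y_train, y_valid = train_test_split(
    X_shuffled, y_shuffled, test_size=0.2, stratify=y_shuffled)
```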

After that I started preparing the dataset for training.
In this step I did several things:
* converted images from RGB color space to YUV color space
* applied histogram equalization on the Y channel only
* converted images from YUV color space back to RGB color space
* normalized images using min-max normalization

I trained my model using the Keras library.
My model consists of convolutional layers with ReLU activation functions.
The model includes dropout layers in order to reduce overfitting.
It also includes fully connected layers with ReLU activation functions.
The last layer is a fully connected layer with a softmax activation function.

To train the model, I used the Adam optimizer with default parameters.

### Data Set Summary & Exploration

#### My approach:

First I loaded the training dataset using the `pickle.load()` function, since the data is provided as pickled files.
Then I computed the number of samples per class using `np.unique()` and `np.where()`.
After that I printed the number of samples per class as well as the total number of samples.
Then I printed the shape of the dataset.

#### Results:

Here are the summary statistics:

* The size of training set is **34799**
* The size of validation set is **4410**
* The size of test set is **12630**
* The shape of a traffic sign image is **(32,32,3)**
* The number of unique classes/labels in the data set is **43**

Here is the loading and visualization code:

```python
import numpy as np
import matplotlib.pyplot as plt
import pickle

# Load the pickled data
training_file = './train.p'
validation_file = './valid.p'
testing_file = './test.p'

with open(training_file, 'rb') as f:
    train = pickle.load(f)
with open(validation_file, 'rb') as f:
    valid = pickle.load(f)
with open(testing_file, 'rb') as f:
    test = pickle.load(f)

X_train_orig, y_train_orig = train['features'], train['labels']
X_valid_orig, y_valid_orig = valid['features'], valid['labels']
X_test_orig, y_test_orig = test['features'], test['labels']

print('Training examples:', len(X_train_orig))
print('Validation examples:', len(X_valid_orig))
print('Testing examples:', len(X_test_orig))
print('Shape:', X_train_orig.shape, y_train_orig.shape)

# Count the samples per class
classes_names = np.unique(y_train_orig)
classes_names_counts = {}
for i in classes_names:
    classes_names_counts[i] = len(np.where(y_train_orig == i)[0])

# Plot a histogram showing the number of samples per class
plt.figure(figsize=(16, 8))
plt.bar(list(classes_names_counts.keys()), list(classes_names_counts.values()))
plt.xticks(rotation=90)
plt.show()
```

![alt text][image1]

### Design and Test a Model Architecture

#### My approach:

In order to prepare the data for training my model, I decided to convert all images from RGB color space into YUV color space, because YUV separates luminance information from chrominance information and is therefore more robust against changes in lighting conditions.

After converting the images into YUV color space, I applied histogram equalization on the Y channel only, because it helps improve the contrast of the images.

After that I converted all images back into RGB color space, so that the rest of the pipeline keeps working with standard 3-channel RGB input.

Finally I normalized all images using min-max normalization.
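
Putting those steps together, here is a minimal sketch of the preprocessing function (a direct transcription of the steps above, matching the per-image code shown later in this writeup; it assumes 8-bit RGB input):

```python
import cv2
import numpy as np

def preprocess(image):
    """Equalize luminance in YUV space, then min-max normalize.

    Expects an 8-bit RGB image of shape (H, W, 3).
    """
    yuv = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
    yuv[:, :, 0] = cv2.equalizeHist(yuv[:, :, 0])  # equalize the Y channel only
    rgb = cv2.cvtColor(yuv, cv2.COLOR_YUV2RGB)
    return np.array(rgb, dtype=np.float32) / 255   # min-max normalization
```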

For training my model I decided to use the Keras library, because it provides a nice high-level API for defining neural network models as well as for training them.

My model consists of convolutional layers with ReLU activation functions, each followed by a max pooling layer.
I added dropout layers after the convolutional layers in order to reduce overfitting.
My model also includes fully connected layers with ReLU activation functions, each followed by a dropout layer.
Finally, my model ends with a fully connected layer with a softmax activation function.

To train my model I used the Adam optimizer with default parameters.

#### Model Architecture

Here is my final model architecture:

| Layer | Description |
|:-----:|:-----------:|
| Input | Input image |
| Convolution | Convolutional layer with ReLU activation, followed by max pooling and dropout |
| Convolution | Convolutional layer with ReLU activation, followed by max pooling and dropout |
| Fully connected | Fully connected layer with ReLU activation, followed by dropout |
| Fully connected | Fully connected layer with ReLU activation, followed by dropout |
| Output | Fully connected layer with softmax activation |
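
As a concrete illustration, here is one possible Keras implementation consistent with that table. The filter counts, kernel sizes, dropout rates, and dense-layer widths are assumptions for the sketch, not the exact values used in the project:

```python
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

# Hyperparameters below are assumed for illustration.
model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Conv2D(64, (3, 3), activation='relu'),
    MaxPooling2D((2, 2)),
    Dropout(0.25),
    Flatten(),
    Dense(256, activation='relu'),
    Dropout(0.5),
    Dense(128, activation='relu'),
    Dropout(0.5),
    Dense(43, activation='softmax'),  # 43 traffic sign classes
])

# Adam optimizer with default parameters, as described above.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```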

#### Training Results

Here are the results after training my model:

* Epochs: 10
* Training loss: 0.0746
* Validation loss: 0.0657
* Test loss: 0.0758
* Test accuracy: 0.9604

![alt text][image5]

![alt text][image6]

#### Discussion:

As you can see above, the accuracy on the test set after training is very high (~96%).

I think the main reason this approach works so well is that preprocessing removes unnecessary information, such as brightness and color variation, from the input images, which makes the task much easier.

Another reason could be the use of convolutional layers, which can detect features no matter where they are located within the image (i.e., top-left corner vs. bottom-right corner).

### Test a Model on New Images

#### My approach:

In order to get new images for testing my model, I searched online for traffic signs similar to those used during training, but photographed under slightly different conditions, so that the classifier could not rely on having seen near-identical examples before.

I downloaded five traffic signs from [this website](http://www.gtsdb-info.org/):
* Speed limit (30km/h)
* Speed limit (60km/h)
* Bumpy road
* Slippery road
* Dangerous curve right

Here are example images:

![alt text][image7]
![alt text][image8]
![alt text][image9]
![alt text][image10]
![alt text][image11]

#### Results:

Here are results after testing my model on new traffic signs:

```python
# Plot softmax probabilities for new images

def plot_softmax(softmax):
    plt.figure(figsize=(12, 4))
    plt.bar(np.arange(43), softmax)
    plt.xlabel('Class')
    plt.ylabel('Probability')
    plt.title('Softmax Probabilities')
    plt.show()

def plot_image(image):
    plt.figure(figsize=(6, 6))
    plt.imshow(image)
    plt.show()

for image_path in new_images:
    # OpenCV loads images as BGR; convert to RGB for display and the model
    image = cv2.imread(image_path)
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    plot_image(image)

    # Preprocess the image before passing it through the network
    image = cv2.cvtColor(image, cv2.COLOR_RGB2YUV)
    image[:, :, 0] = cv2.equalizeHist(image[:, :, 0])
    image = cv2.cvtColor(image, cv2.COLOR_YUV2RGB)

    # Min-max normalization
    image = np.array(image, dtype=np.float32) / 255

    # Predict the class (x, y_pred_softmax and sess come from the training code)
    batch = image.reshape(-1, image.shape[0], image.shape[1], 3)
    prediction = sess.run(tf.argmax(y_pred_softmax, axis=1), feed_dict={x: batch})

    # Get the softmax probabilities
    softmax = sess.run(y_pred_softmax, feed_dict={x: batch})

    # Print results
    print('Predicted class:', prediction[0])
    print('Real class:', classes_names[np.where(images_paths == image_path)[0][0]])

    plot_softmax(softmax[0])
```
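
To analyze the softmax output more directly, one could also inspect only the top-5 probabilities per image. A minimal sketch, assuming the same `sess`, `x`, `y_pred_softmax`, and preprocessed `batch` as above:

```python
# Top-5 softmax probabilities for one preprocessed image batch (sketch).
top5 = sess.run(tf.nn.top_k(y_pred_softmax, k=5), feed_dict={x: batch})
for prob, cls in zip(top5.values[0], top5.indices[0]):
    print('class %d: %.3f' % (cls, prob))
```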

Results:

```
# Predictions on new images

# Speed limit (30km/h)
Predicted class: [13]
Real class: [13]

# Speed limit (60km/h)
Predicted class: [4]
Real class: [4]

# Bumpy road
Predicted class: [25]
Real class: [25]

# Slippery road
Predicted class: [33]
Real class: [33]

# Dangerous curve right
Predicted class: [35]
Real class: [35]
```

![alt text][image7]

![alt text][image8]

![alt text][image9]

![alt text][image10]

![alt text][image11]

As you can see above, all predictions were correct!

#### Discussion:

One possible reason my model classified all five new traffic signs correctly is that they were taken under conditions similar to those of the training images, so there was no drastic domain shift for the classifier to cope with.

Another reason could be that the preprocessing steps removed unnecessary information, such as brightness and color variation, from the input images, letting the classifier focus on the shapes themselves rather than on lighting conditions.

### Conclusion

In conclusion, our approach worked well: accuracy on the test set after training was very high (~96%). We believe the main reason is that preprocessing removes unnecessary information, such as brightness and color variation, from the input images, letting the classifier focus on the shapes themselves rather than on lighting conditions.

Another reason is the use of convolutional layers, which detect features no matter where they are located within the image (i.e., top-left corner vs. bottom-right corner).

Finally, we believe performance could be improved further by experimenting with different model types and architectures, rather than sticking to a single one throughout the whole project, since other architectures may be capable of even higher accuracy.