1. Introduction

Cryptography has played a crucial role in securing information from unauthorized access for centuries. The Bacon Cipher, developed by Sir Francis Bacon in the 16th century, is a classical steganographic cipher that hides messages in the form of binary sequences. Despite its historical significance, the cipher remains an interesting subject for cryptanalysis, especially with the advent of modern computational techniques. This project focuses on decrypting the Bacon Cipher using advanced metaheuristic algorithms, leveraging their ability to explore large search spaces efficiently. We propose using the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Cuckoo Search (CS), the Grey Wolf Optimizer (GWO), and the Red Deer Algorithm (RDA) to optimize the decryption process. By comparing the performance of these algorithms in breaking encoded messages, we aim to identify the most effective approach in terms of decryption accuracy, speed, and computational efficiency.

2. Objectives

The primary objectives of this study are:
• To design and implement cryptanalysis techniques for the Bacon Cipher using GA, PSO, CS, GWO, and RDA.
• To assess and compare the effectiveness of these metaheuristic techniques in decrypting encoded messages.
• To evaluate performance based on decryption accuracy, convergence speed, and computational complexity.
• To establish an optimal heuristic framework for classical cipher decryption.

3. Methodology

3.1 Problem Representation

The Bacon Cipher encodes plaintext letters into a binary format, typically using two distinct symbols (e.g., variations in font style, uppercase/lowercase letters, or differing text formats). The main challenge lies in identifying the correct mappings and deciphering the message efficiently. To solve this problem, we define an adaptive fitness function that evaluates potential decryption keys based on:
• Linguistic frequency analysis (checking the probability of letter occurrences).
• N-gram analysis (ensuring the decrypted text follows common word structures).
• Dictionary-based validation (matching outputs against known words).

Using the statistical properties of natural language, the metaheuristic algorithms will systematically explore key mappings to maximize decryption accuracy.

3.2 Metaheuristic Framework

Decryption of the Bacon Cipher using metaheuristic techniques follows a structured framework designed to iteratively improve the accuracy of the recovered plaintext. The framework consists of the following phases:

3.2.1 Initialization
• The decryption process begins by generating a diverse population of potential key mappings.
• Each key mapping represents a binary-to-text transformation used to decode the ciphered message.
• The population is initialized randomly or with a heuristic approach that seeds frequent letter patterns based on linguistic analysis.
• Diversity in initialization is crucial to avoid premature convergence and to ensure broad exploration of the search space.

3.2.2 Fitness Evaluation
• The decrypted output for each candidate key is evaluated for linguistic coherence.
• A well-defined fitness function scores candidate solutions based on:
  o Letter frequency analysis: comparison with statistical models of the language.
  o N-gram analysis: examining bigrams and trigrams for realistic word structures.
  o Dictionary validation: checking whether the decrypted message consists of valid words.
  o Entropy measurement: assessing randomness to distinguish plaintext from gibberish.

3.2.3 Optimization
• The algorithms iteratively refine candidate solutions using algorithm-specific operators such as selection, mutation, crossover, or swarm-based adjustments.
• The process continues over multiple iterations to maximize fitness and minimize errors.
• Adaptive parameter tuning dynamically adjusts algorithm parameters based on convergence rates.
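The weighted fitness evaluation described above can be illustrated with a minimal sketch. The weights, the truncated frequency table, and the four-word stand-in dictionary below are illustrative assumptions only; the full FitnessEvaluator later in this chat adds n-gram scoring on top.

```python
# Hedged sketch of the adaptive fitness function described above.
# The weights and the tiny word list are illustrative assumptions.

# Approximate English letter frequencies (top letters only, for brevity)
LETTER_FREQ = {'E': 0.127, 'T': 0.091, 'A': 0.082, 'O': 0.075, 'I': 0.070,
               'N': 0.067, 'S': 0.063, 'H': 0.061, 'R': 0.060}
SMALL_DICT = {'THE', 'AND', 'FOX', 'DOG'}  # stand-in for a real dictionary

def fitness(text, w_freq=0.5, w_dict=0.5):
    """Score candidate plaintext: letter-frequency match plus dictionary hits."""
    letters = [c for c in text.upper() if c.isalpha()]
    if not letters:
        return 0.0
    # Letter-frequency component: 1 minus total absolute deviation (clamped)
    freq_dev = sum(abs(LETTER_FREQ.get(ch, 0.0) - letters.count(ch) / len(letters))
                   for ch in set(letters) | set(LETTER_FREQ))
    freq_score = max(0.0, 1.0 - freq_dev / 2)
    # Dictionary component: fraction of tokens that are known words
    tokens = text.upper().split()
    dict_score = sum(t in SMALL_DICT for t in tokens) / len(tokens) if tokens else 0.0
    return w_freq * freq_score + w_dict * dict_score

print(fitness("THE QUICK BROWN FOX"))  # plausible English scores higher
print(fitness("ZZXQJ ZZXQJ"))          # gibberish scores lower
```

The key property a metaheuristic needs is only that plausible English scores strictly higher than gibberish; the absolute values matter less than the ordering.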
3.2.4 Termination
• The process stops when:
  o A predefined accuracy threshold is reached (e.g., 90%+ plaintext recognition).
  o The number of iterations reaches a limit.
  o No significant improvement is observed over multiple iterations.
• If stagnation occurs, mechanisms such as random restarts or hybridization with another algorithm are employed.

3.3 Metaheuristic Algorithms

3.3.1 Genetic Algorithm (GA)

The Genetic Algorithm (GA) is an evolutionary computation method inspired by natural selection. It continuously refines key mappings over successive generations.

Key steps in GA:
1. Selection:
  o The best-performing decryption keys are selected based on their fitness scores.
  o Selection strategies include roulette wheel, tournament, and rank-based selection.
2. Crossover:
  o Two parent key mappings are combined to create offspring.
  o Different crossover techniques, such as single-point, two-point, or uniform crossover, are explored to balance exploitation and exploration.
3. Mutation:
  o Small alterations are introduced to maintain population diversity.
  o Mutation techniques include bit-flipping in binary key representations and random symbol swaps.
4. Survivor Selection:
  o The next generation is chosen based on fitness scores.
  o Elitism retains top-performing candidates from the previous generation.

GA continues evolving populations until an optimal or near-optimal key mapping is found.

3.3.2 Particle Swarm Optimization (PSO)

PSO is inspired by swarm intelligence: key candidates are modeled as particles moving through a search space.

Key steps in PSO:
1. Each particle represents a potential decryption key mapping.
2. Particles adjust their positions based on:
  o Their own best-known decryption accuracy (pBest).
  o The best performance achieved by any particle in the swarm (gBest).
3. Velocity updates help particles explore new solutions while converging towards optimal ones.
4. The process continues until a stopping criterion is met.

Advantages:
• PSO typically converges faster than GA due to lower computational overhead.
• It maintains a balance between exploration and exploitation via inertia weight adjustments.

3.3.3 Cuckoo Search (CS)

Cuckoo Search is a nature-inspired metaheuristic based on the brood parasitism of cuckoo birds.

Key steps in CS:
1. Each cuckoo (solution) lays an egg (new key mapping) in a randomly chosen nest.
2. A fraction of the worse solutions is discarded according to a discovery probability (Pa).
3. New solutions are generated using Lévy flights, ensuring large-scale exploration.
4. The best solutions are retained in the population.

Advantages:
• CS explores the search space globally and efficiently thanks to non-local random walks.
• It can outperform GA and PSO on problems that benefit from large random jumps in the search space.

3.3.4 Grey Wolf Optimizer (GWO)

GWO models the hunting behavior of grey wolves, introducing hierarchical leadership dynamics among alpha (α), beta (β), delta (δ), and omega (ω) wolves.

Key steps in GWO:
1. The alpha wolf (α) represents the best decryption candidate.
2. The beta (β) and delta (δ) wolves guide the search by adjusting solutions.
3. The omega wolves (ω) explore new potential key mappings.
4. The entire pack iteratively refines decryption accuracy through dynamic adjustments.

Advantages:
• Fast convergence due to its leadership-driven exploration-exploitation mechanism.
• Less prone to stagnation than GA and PSO.

3.3.5 Red Deer Algorithm (RDA)

RDA simulates dominance behavior in red deer populations, where dominant males compete for mating rights.

Key steps in RDA:
1. The dominant males represent the best decryption keys.
2. Subordinate males and hinds (females) explore alternative decryption mappings.
3. Only high-ranking solutions survive; weaker solutions are eliminated.

Advantages:
• RDA is highly adaptive for cryptanalysis problems.
• Its hierarchical selection improves search efficiency.

4. Comparative Analysis

The research will compare these algorithms across key performance metrics:
• Decryption accuracy: the primary metric is the accuracy of the decrypted plaintext, measured by its linguistic plausibility, adherence to English language patterns, and the correctness of recovered keywords.
• Convergence speed: how quickly each algorithm converges towards the optimal solution, measured by the number of iterations required to achieve a satisfactory decryption.
• Computational efficiency: the computational complexity of each algorithm, considering factors such as execution time and resource consumption.
• Robustness: the algorithms will be tested on a diverse set of ciphertexts with varying key lengths and complexities to evaluate their robustness and generalizability.

5. Experimental Setup
1. Data preparation:
  o Encrypted Bacon cipher texts of varying lengths.
  o Benchmarking against known plaintext messages.
2. Implementation:
  o Algorithms implemented in Python (NumPy, SciPy, NLTK, DEAP, PySwarm).
  o Execution on high-performance computing clusters for enhanced speed.
3. Evaluation metrics:
  o Word accuracy rate (WAR) for correctness.
  o Time to convergence for efficiency.

6. Expected Outcomes
• Identification of the best-performing metaheuristic for Bacon Cipher decryption.
• Performance comparison across techniques based on accuracy, speed, and robustness.
• Insights into computational trade-offs, aiding future cryptanalysis research.
• An optimized framework for applying AI-based cryptanalysis to real-world encryption systems.
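The GWO steps described above are formulated for continuous search spaces, so applying them to discrete key mappings requires an adaptation. The sketch below is one illustrative discretization (an assumption, not the canonical GWO update equations): candidate keys are permutations, and lower-ranked wolves move toward the alpha, beta, and delta leaders by adopting some of their assignments through swaps. A toy objective, matching positions against a hidden target permutation, stands in for the linguistic fitness function.

```python
import random

LETTERS = list('ABCDEFGHIJKLMNOPQRSTUVWXYZ')

def move_toward(wolf, leader, n_swaps=3):
    """Copy up to n_swaps assignments from the leader into the wolf via swaps."""
    wolf = wolf[:]
    diffs = [i for i in range(len(wolf)) if wolf[i] != leader[i]]
    for i in random.sample(diffs, min(n_swaps, len(diffs))):
        j = wolf.index(leader[i])  # where the leader's letter currently sits
        wolf[i], wolf[j] = wolf[j], wolf[i]
    return wolf

def gwo(fitness, pack_size=20, iterations=200):
    """Discrete GWO sketch: leaders guide swap-based moves, greedy acceptance."""
    pack = [random.sample(LETTERS, len(LETTERS)) for _ in range(pack_size)]
    for _ in range(iterations):
        pack.sort(key=fitness, reverse=True)
        alpha, beta, delta = pack[0], pack[1], pack[2]
        # Omega wolves move toward a randomly chosen leader, with occasional
        # random swaps to keep exploring; moves that lower fitness are rejected.
        for k in range(3, pack_size):
            leader = random.choice([alpha, beta, delta])
            wolf = move_toward(pack[k], leader)
            if random.random() < 0.3:  # exploration step
                i, j = random.sample(range(len(wolf)), 2)
                wolf[i], wolf[j] = wolf[j], wolf[i]
            if fitness(wolf) >= fitness(pack[k]):
                pack[k] = wolf
    return max(pack, key=fitness)

# Toy objective: recover a hidden target permutation (stands in for the
# linguistic fitness of a decrypted text).
random.seed(1)
target = random.sample(LETTERS, len(LETTERS))
best = gwo(lambda w: sum(a == b for a, b in zip(w, target)))
print(sum(a == b for a, b in zip(best, target)), "of", len(LETTERS), "positions recovered")
```

For the cipher itself, the toy objective would be replaced by the FitnessEvaluator scores, exactly as in the GA and PSO implementations later in this chat.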
7. Timeline (Total: 8 Weeks)
• Literature Review: 1 week
• Algorithm Design & Parameter Tuning: 2 weeks
• Implementation of Algorithms: 2 weeks
• Testing & Performance Evaluation: 2 weeks
• Report Writing & Documentation: 1 week

8. Conclusion

This proposal presents an in-depth approach to decrypting the Bacon Cipher using metaheuristic algorithms. By leveraging the strengths of GA, PSO, CS, GWO, and RDA, this research aims to improve classical cryptanalysis techniques and broaden the understanding of heuristic-based decryption methods. The study's findings will contribute to future advancements in computational cryptography and information security, paving the way for more efficient cryptanalysis strategies in modern security applications.

So basically, I want to write a research paper on this and carry out the study. Please write the code to implement these approaches, explain how to implement and study them and how to record the statistics of our observations on Windows, and guide me in detail in a single chat. Also give me all the content and links to study this. I want to use only C++ or Python for the implementation, as those are the only languages I know.
I'd be happy to help you implement a comparative study of metaheuristic algorithms for Bacon Cipher decryption. This is an interesting cryptanalysis project that combines classical cryptography with modern optimization techniques.
I'll help you implement a comprehensive framework to study the effectiveness of different metaheuristic algorithms for Bacon Cipher decryption. Let's start with the basics and then build up to the complete implementation.
The Bacon Cipher is a steganographic technique that encodes each letter as a 5-bit binary sequence (traditionally using two different typefaces). Let's first implement the Bacon Cipher encoding and decoding functions.
Now, let's implement a fitness function that will evaluate how good a potential decryption is. This will be used by all metaheuristic algorithms.
Now, let's implement each of the metaheuristic algorithms for Bacon Cipher decryption.
Now, let's create a framework to compare the performance of these algorithms:
Let's create a main script that will run the entire analysis:
Here's a step-by-step guide on how to run and study this implementation:
Set up your environment:
pip install numpy matplotlib nltk

Download NLTK data: run Python and execute:
import nltk
nltk.download('words')
nltk.download('brown')

Create the project structure:
Run the analysis:
Collect and analyze results:
Extend the study:
Methodology:
Results Analysis:
Discussion:
Visualization:
Metaheuristic Algorithms:
Cryptography:
Python for Scientific Computing:
Online Resources:
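For recording the statistics of each run (which works the same on Windows as elsewhere), one simple approach is to append per-run results to a CSV file that opens directly in Excel. This is a minimal sketch; the field names are illustrative assumptions, so adapt them to whatever your run() functions actually return.

```python
import csv
import os
import time

# Illustrative field names; adapt to your own run() results.
FIELDS = ['algorithm', 'ciphertext_len', 'best_fitness', 'iterations', 'runtime_s']

def record_run(path, algorithm, ciphertext_len, best_fitness, iterations, runtime_s):
    """Append one row of per-run statistics, writing the header on first use."""
    new_file = not os.path.exists(path)
    with open(path, 'a', newline='') as f:  # newline='' avoids blank rows on Windows
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({'algorithm': algorithm, 'ciphertext_len': ciphertext_len,
                         'best_fitness': best_fitness, 'iterations': iterations,
                         'runtime_s': runtime_s})

# Example usage: time a (dummy) run and log it
start = time.time()
# ... run an algorithm here ...
record_run('results.csv', 'GA', 215, 0.87, 100, time.time() - start)
```

Collecting every run in one CSV makes the later comparison (accuracy, convergence speed, runtime) a simple matter of loading the file with NumPy, pandas, or Excel.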
I hope this comprehensive implementation helps with your research project on Bacon Cipher decryption using metaheuristic algorithms. Feel free to ask if you need any clarification or have questions about specific parts of the implementation!
Yes, you can absolutely implement this entire project in Google Colab! It is actually an excellent choice for this kind of experiment.
Below is a complete Colab notebook implementation with cell-by-cell code. You can copy each cell directly into Colab and run them sequentially.
# Install required packages
!pip install nltk matplotlib numpy

# Download NLTK data
import nltk
nltk.download('words')
nltk.download('brown')

# Import necessary libraries
import random
import time
import numpy as np
import matplotlib.pyplot as plt
from nltk.util import ngrams
from nltk.corpus import words, brown
import re
from google.colab import files

class BaconCipher:
    def __init__(self, version=1):
        """Initialize the Bacon Cipher with either version 1 (I=J, U=V) or version 2 (distinct letters)"""
        self.version = version
        if version == 1:
            # Version 1: I/J and U/V are treated as the same letter
            self.alphabet = {
                'A': 'aaaaa', 'B': 'aaaab', 'C': 'aaaba', 'D': 'aaabb', 'E': 'aabaa',
                'F': 'aabab', 'G': 'aabba', 'H': 'aabbb', 'I': 'abaaa', 'J': 'abaaa',
                'K': 'abaab', 'L': 'ababa', 'M': 'ababb', 'N': 'abbaa', 'O': 'abbab',
                'P': 'abbba', 'Q': 'abbbb', 'R': 'baaaa', 'S': 'baaab', 'T': 'baaba',
                'U': 'baabb', 'V': 'baabb', 'W': 'babaa', 'X': 'babab', 'Y': 'babba',
                'Z': 'babbb'
            }
        else:
            # Version 2: All 26 letters are distinct
            self.alphabet = {
                'A': 'aaaaa', 'B': 'aaaab', 'C': 'aaaba', 'D': 'aaabb', 'E': 'aabaa',
                'F': 'aabab', 'G': 'aabba', 'H': 'aabbb', 'I': 'abaaa', 'J': 'abaab',
                'K': 'ababa', 'L': 'ababb', 'M': 'abbaa', 'N': 'abbab', 'O': 'abbba',
                'P': 'abbbb', 'Q': 'baaaa', 'R': 'baaab', 'S': 'baaba', 'T': 'baabb',
                'U': 'babaa', 'V': 'babab', 'W': 'babba', 'X': 'babbb', 'Y': 'bbaaa',
                'Z': 'bbaab'
            }
        # Create reverse mapping for decoding
        self.reverse_alphabet = {v: k for k, v in self.alphabet.items()}

    def encode(self, plaintext):
        """Encode plaintext using Bacon Cipher"""
        plaintext = plaintext.upper()
        encoded = []
        for char in plaintext:
            if char in self.alphabet:
                encoded.append(self.alphabet[char])
            elif char.isspace():
                encoded.append(' ')
        return ''.join(encoded)

    def decode(self, ciphertext):
        """Decode Bacon Cipher text"""
        # Split the ciphertext into 5-character chunks
        chunks = []
        current_chunk = ""
        for char in ciphertext:
            if char in 'ab':
                current_chunk += char
                if len(current_chunk) == 5:
                    chunks.append(current_chunk)
                    current_chunk = ""
            elif char.isspace():
                if current_chunk:  # Handle incomplete chunks
                    chunks.append(current_chunk)
                    current_chunk = ""
                chunks.append(' ')
        # Handle any remaining characters
        if current_chunk:
            chunks.append(current_chunk)
        # Decode each chunk
        decoded = []
        for chunk in chunks:
            if chunk == ' ':
                decoded.append(' ')
            elif chunk in self.reverse_alphabet:
                decoded.append(self.reverse_alphabet[chunk])
            else:
                # Handle invalid chunks
                decoded.append('?')
        return ''.join(decoded)

# Test the Bacon Cipher
bacon = BaconCipher()
plaintext = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
ciphertext = bacon.encode(plaintext)
decoded = bacon.decode(ciphertext)
print(f"Original: {plaintext}")
print(f"Encoded: {ciphertext}")
print(f"Decoded: {decoded}")

class FitnessEvaluator:
    def __init__(self):
        # English letter frequency (from most common to least common)
        self.letter_freq = {
            'E': 0.1202, 'T': 0.0910, 'A': 0.0812, 'O': 0.0768, 'I': 0.0731,
            'N': 0.0695, 'S': 0.0628, 'R': 0.0602, 'H': 0.0592, 'D': 0.0432,
            'L': 0.0398, 'U': 0.0288, 'C': 0.0271, 'M': 0.0261, 'F': 0.0230,
            'Y': 0.0211, 'W': 0.0209, 'G': 0.0203, 'P': 0.0182, 'B': 0.0149,
            'V': 0.0111, 'K': 0.0069, 'X': 0.0017, 'Q': 0.0011, 'J': 0.0010, 'Z': 0.0007
        }
        # Load English words
        self.english_words = set(word.lower() for word in words.words())
        # Create bigram and trigram frequency dictionaries from the Brown corpus
        brown_text = ' '.join(brown.words()).lower()
        self.bigrams = self._get_ngram_freq(brown_text, 2)
        self.trigrams = self._get_ngram_freq(brown_text, 3)

    def _get_ngram_freq(self, text, n):
        """Calculate frequency of n-grams in text"""
        n_grams = ngrams(text, n)
        freq_dict = {}
        total = 0
        for gram in n_grams:
            gram_str = ''.join(gram)
            if gram_str in freq_dict:
                freq_dict[gram_str] += 1
            else:
                freq_dict[gram_str] = 1
            total += 1
        # Normalize frequencies
        for gram in freq_dict:
            freq_dict[gram] /= total
        return freq_dict

    def letter_frequency_score(self, text):
        """Calculate letter frequency score"""
        text = text.upper()
        letter_count = {}
        total_letters = 0
        for char in text:
            if char in self.letter_freq:
                letter_count[char] = letter_count.get(char, 0) + 1
                total_letters += 1
        if total_letters == 0:
            return 0
        # Calculate frequency distribution in the text
        observed_freq = {char: count / total_letters for char, count in letter_count.items()}
        # Calculate the difference between observed and expected frequencies
        freq_diff = 0
        for char in self.letter_freq:
            expected = self.letter_freq[char]
            observed = observed_freq.get(char, 0)
            freq_diff += abs(expected - observed)
        # Normalize to get a score between 0 and 1 (1 being a perfect match);
        # the sum of absolute differences can be at most 2
        return 1 - (freq_diff / 2)

    def ngram_score(self, text, n=2):
        """Calculate n-gram score"""
        text = text.lower()
        text_ngrams = [''.join(g) for g in ngrams(text, n)]
        if not text_ngrams:
            return 0
        ngram_dict = self.bigrams if n == 2 else self.trigrams
        # Calculate the average log probability of n-grams
        log_prob_sum = 0
        count = 0
        for gram in text_ngrams:
            if gram in ngram_dict:
                log_prob_sum += np.log(ngram_dict[gram] + 1e-10)  # Add small value to avoid log(0)
                count += 1
        if count == 0:
            return 0
        # Normalize and convert to a score between 0 and 1
        avg_log_prob = log_prob_sum / count
        # Typical log probabilities for English text are around -10 to -2
        normalized_score = (avg_log_prob + 15) / 15  # Adjust these values based on your data
        return max(0, min(normalized_score, 1))  # Clamp between 0 and 1

    def word_recognition_score(self, text):
        """Calculate word recognition score"""
        # Clean and tokenize the text
        text = text.lower()
        words_in_text = re.findall(r'\b[a-z]{2,}\b', text)
        if not words_in_text:
            return 0
        # Count how many words are in the English dictionary
        valid_words = sum(1 for word in words_in_text if word in self.english_words)
        # Return the proportion of valid words
        return valid_words / len(words_in_text)

    def evaluate(self, text):
        """Evaluate the fitness of a decrypted text"""
        # Weights for different components
        w1, w2, w3, w4 = 0.3, 0.3, 0.3, 0.1
        # Calculate individual scores
        letter_score = self.letter_frequency_score(text)
        bigram_score = self.ngram_score(text, 2)
        trigram_score = self.ngram_score(text, 3)
        word_score = self.word_recognition_score(text)
        # Combine scores
        total_score = w1 * letter_score + w2 * bigram_score + w3 * trigram_score + w4 * word_score
        return total_score

# Test the fitness evaluator
evaluator = FitnessEvaluator()
test_text = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
fitness = evaluator.evaluate(test_text)
print(f"Fitness of original text: {fitness:.4f}")

# Test with scrambled text
scrambled = "HET KCIUQ NWORB XOF SPMUJ REVO EHT YZAL GOD"
scrambled_fitness = evaluator.evaluate(scrambled)
print(f"Fitness of scrambled text: {scrambled_fitness:.4f}")

class GeneticAlgorithm:
    def __init__(self, ciphertext, population_size=50, generations=100,
                 crossover_rate=0.8, mutation_rate=0.2, elitism=2):
        self.ciphertext = ciphertext
        self.population_size = population_size
        self.generations = generations
        self.crossover_rate = crossover_rate
        self.mutation_rate = mutation_rate
        self.elitism = elitism
        self.fitness_evaluator = FitnessEvaluator()
        self.bacon = BaconCipher()
        # All possible 5-bit binary patterns
        self.all_patterns = [''.join(p) for p in
                             [list(format(i, '05b').replace('0', 'a').replace('1', 'b'))
                              for i in range(32)]]
        # The 26 letters of the alphabet
        self.letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

    def _create_individual(self):
        """Create a random mapping from binary patterns to letters"""
        # Select 26 patterns from the 32 possible patterns
        selected_patterns = random.sample(self.all_patterns, 26)
        # Create a mapping from patterns to letters
        mapping = {}
        for i, letter in enumerate(self.letters):
            mapping[selected_patterns[i]] = letter
        return mapping

    def _initialize_population(self):
        """Initialize a population of random mappings"""
        return [self._create_individual() for _ in range(self.population_size)]

    def _decode_with_mapping(self, mapping):
        """Decode the ciphertext using a given mapping"""
        # Split the ciphertext into 5-character chunks
        chunks = []
        current_chunk = ""
        for char in self.ciphertext:
            if char in 'ab':
                current_chunk += char
                if len(current_chunk) == 5:
                    chunks.append(current_chunk)
                    current_chunk = ""
            elif char.isspace():
                if current_chunk:  # Handle incomplete chunks
                    chunks.append(current_chunk)
                    current_chunk = ""
                chunks.append(' ')
        # Handle any remaining characters
        if current_chunk:
            chunks.append(current_chunk)
        # Decode each chunk
        decoded = []
        for chunk in chunks:
            if chunk == ' ':
                decoded.append(' ')
            elif chunk in mapping:
                decoded.append(mapping[chunk])
            else:
                # Handle patterns not in the mapping
                decoded.append('?')
        return ''.join(decoded)

    def _evaluate_fitness(self, individual):
        """Evaluate the fitness of an individual mapping"""
        decoded_text = self._decode_with_mapping(individual)
        return self.fitness_evaluator.evaluate(decoded_text)

    def _select_parent(self, population, fitnesses):
        """Select a parent using tournament selection"""
        tournament_size = 3
        tournament_indices = random.sample(range(len(population)), tournament_size)
        tournament_fitnesses = [fitnesses[i] for i in tournament_indices]
        winner_idx = tournament_indices[np.argmax(tournament_fitnesses)]
        return population[winner_idx]

    def _crossover(self, parent1, parent2):
        """Perform crossover between two parents"""
        if random.random() > self.crossover_rate:
            return parent1.copy()
        # Get the keys (patterns) from both parents
        keys1 = list(parent1.keys())
        keys2 = list(parent2.keys())
        # Choose a crossover point
        crossover_point = random.randint(1, 25)
        # Create child by combining parts from both parents
        child = {}
        # Add first part from parent1
        for i in range(crossover_point):
            child[keys1[i]] = parent1[keys1[i]]
        # Add remaining letters from parent2, avoiding duplicate letters and
        # (bug fix) not overwriting patterns already inherited from parent1
        used_letters = set(child.values())
        for key in keys2:
            letter = parent2[key]
            if letter not in used_letters and key not in child and len(child) < 26:
                child[key] = letter
                used_letters.add(letter)
        # If child doesn't have all 26 letters, fill in the missing ones
        if len(child) < 26:
            missing_letters = set(self.letters) - used_letters
            missing_patterns = set(self.all_patterns) - set(child.keys())
            for letter in missing_letters:
                pattern = random.choice(list(missing_patterns))
                child[pattern] = letter
                missing_patterns.remove(pattern)
        return child

    def _mutate(self, individual):
        """Mutate an individual by swapping mappings"""
        if random.random() > self.mutation_rate:
            return individual
        # Create a copy of the individual
        mutated = individual.copy()
        # Choose two random patterns to swap
        patterns = list(mutated.keys())
        idx1, idx2 = random.sample(range(len(patterns)), 2)
        # Swap the letters
        mutated[patterns[idx1]], mutated[patterns[idx2]] = mutated[patterns[idx2]], mutated[patterns[idx1]]
        return mutated

    def run(self):
        """Run the genetic algorithm"""
        # Initialize population
        population = self._initialize_population()
        best_individual = None
        best_fitness = -1
        best_text = ""
        # Statistics to track
        stats = {
            'best_fitness_per_generation': [],
            'avg_fitness_per_generation': []
        }
        # Evolution loop
        for generation in range(self.generations):
            # Evaluate fitness for each individual
            fitnesses = [self._evaluate_fitness(ind) for ind in population]
            # Track statistics
            avg_fitness = np.mean(fitnesses)
            max_fitness_idx = np.argmax(fitnesses)
            max_fitness = fitnesses[max_fitness_idx]
            stats['best_fitness_per_generation'].append(max_fitness)
            stats['avg_fitness_per_generation'].append(avg_fitness)
            # Update best individual if better
            if max_fitness > best_fitness:
                best_individual = population[max_fitness_idx]
                best_fitness = max_fitness
                best_text = self._decode_with_mapping(best_individual)
                print(f"Generation {generation}: New best fitness = {best_fitness:.4f}")
                print(f"Decoded text: {best_text[:100]}...")
            # Create new population
            new_population = []
            # Elitism: keep the best individuals
            sorted_indices = np.argsort(fitnesses)[::-1]
            for i in range(self.elitism):
                new_population.append(population[sorted_indices[i]])
            # Create rest of the new population
            while len(new_population) < self.population_size:
                # Select parents
                parent1 = self._select_parent(population, fitnesses)
                parent2 = self._select_parent(population, fitnesses)
                # Create child through crossover and mutation
                child = self._crossover(parent1, parent2)
                child = self._mutate(child)
                new_population.append(child)
            # Replace old population
            population = new_population
        return {
            'best_individual': best_individual,
            'best_fitness': best_fitness,
            'best_text': best_text,
            'stats': stats
        }

class ParticleSwarmOptimization:
    def __init__(self, ciphertext, num_particles=30, iterations=100,
                 w=0.7, c1=1.5, c2=1.5):
        self.ciphertext = ciphertext
        self.num_particles = num_particles
        self.iterations = iterations
        self.w = w  # Inertia weight
        self.c1 = c1  # Cognitive coefficient
        self.c2 = c2  # Social coefficient
        self.fitness_evaluator = FitnessEvaluator()
        self.bacon = BaconCipher()
        # All possible 5-bit binary patterns
        self.all_patterns = [''.join(p) for p in
                             [list(format(i, '05b').replace('0', 'a').replace('1', 'b'))
                              for i in range(32)]]
        # The 26 letters of the alphabet
        self.letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

    def _create_mapping(self):
        """Create a random mapping from binary patterns to letters"""
        # Select 26 patterns from the 32 possible patterns
        selected_patterns = random.sample(self.all_patterns, 26)
        # Create a mapping from patterns to letters
        mapping = {}
        for i, letter in enumerate(self.letters):
            mapping[selected_patterns[i]] = letter
        return mapping

    def _decode_with_mapping(self, mapping):
        """Decode the ciphertext using a given mapping"""
        # Split the ciphertext into 5-character chunks
        chunks = []
        current_chunk = ""
        for char in self.ciphertext:
            if char in 'ab':
                current_chunk += char
                if len(current_chunk) == 5:
                    chunks.append(current_chunk)
                    current_chunk = ""
            elif char.isspace():
                if current_chunk:  # Handle incomplete chunks
                    chunks.append(current_chunk)
                    current_chunk = ""
                chunks.append(' ')
        # Handle any remaining characters
        if current_chunk:
            chunks.append(current_chunk)
        # Decode each chunk
        decoded = []
        for chunk in chunks:
            if chunk == ' ':
                decoded.append(' ')
            elif chunk in mapping:
                decoded.append(mapping[chunk])
            else:
                # Handle patterns not in the mapping
                decoded.append('?')
        return ''.join(decoded)

    def _evaluate_fitness(self, mapping):
        """Evaluate the fitness of a mapping"""
        decoded_text = self._decode_with_mapping(mapping)
        return self.fitness_evaluator.evaluate(decoded_text)

    def _swap_two_letters(self, mapping):
        """Swap two random letters in the mapping"""
        new_mapping = mapping.copy()
        # Choose two random patterns
        patterns = list(new_mapping.keys())
        idx1, idx2 = random.sample(range(len(patterns)), 2)
        # Swap the letters
        new_mapping[patterns[idx1]], new_mapping[patterns[idx2]] = new_mapping[patterns[idx2]], new_mapping[patterns[idx1]]
        return new_mapping

    def _update_position(self, current, pbest, gbest):
        """Update position (mapping) based on PSO equations"""
        # Determine how many swaps to make based on cognitive and social components
        r1, r2 = random.random(), random.random()
        # Cognitive component: move towards personal best
        cognitive_swaps = int(self.c1 * r1 * 5)  # Max 5 swaps
        # Social component: move towards global best
        social_swaps = int(self.c2 * r2 * 5)  # Max 5 swaps
        # Inertia component: random exploration
        inertia_swaps = int(self.w * 3)  # Max 3 swaps
        # Create a new mapping starting from the current one
        new_mapping = current.copy()
        # Apply inertia (random exploration)
        for _ in range(inertia_swaps):
            new_mapping = self._swap_two_letters(new_mapping)
        # Apply cognitive component (move towards personal best):
        # identify differences between current and pbest and make some of those changes
        current_items = list(current.items())
        pbest_items = list(pbest.items())
        differences = []
        for i, (pattern, letter) in enumerate(current_items):
            if pbest_items[i][1] != letter:
                differences.append((i, pbest_items[i][1]))
        # Apply some of the differences
        if differences and cognitive_swaps > 0:
            swaps = min(cognitive_swaps, len(differences))
            for i, new_letter in random.sample(differences, swaps):
                pattern = current_items[i][0]
                # Find where the new letter currently is
                for p, l in new_mapping.items():
                    if l == new_letter:
                        # Swap the letters
                        new_mapping[p] = new_mapping[pattern]
                        new_mapping[pattern] = new_letter
                        break
        # Apply social component (move towards global best), similar to cognitive
        gbest_items = list(gbest.items())
        differences = []
        for i, (pattern, letter) in enumerate(current_items):
            if gbest_items[i][1] != letter:
                differences.append((i, gbest_items[i][1]))
        # Apply some of the differences
        if differences and social_swaps > 0:
            swaps = min(social_swaps, len(differences))
            for i, new_letter in random.sample(differences, swaps):
                pattern = current_items[i][0]
                # Find where the new letter currently is
                for p, l in new_mapping.items():
                    if l == new_letter:
                        # Swap the letters
                        new_mapping[p] = new_mapping[pattern]
                        new_mapping[pattern] = new_letter
                        break
        return new_mapping

    def run(self):
        """Run the PSO algorithm"""
        # Initialize particles
        particles = [self._create_mapping() for _ in range(self.num_particles)]
        # Initialize personal best for each particle
        pbest = particles.copy()
        pbest_fitness = [self._evaluate_fitness(p) for p in pbest]
        # Initialize global best
        gbest_idx = np.argmax(pbest_fitness)
        gbest = pbest[gbest_idx]
        gbest_fitness = pbest_fitness[gbest_idx]
        best_text = self._decode_with_mapping(gbest)
        # Statistics to track
        stats = {
            'best_fitness_per_iteration': [],
            'avg_fitness_per_iteration': []
        }
        # Main PSO loop
        for iteration in range(self.iterations):
            # Update each particle
            for i in range(self.num_particles):
                # Update particle position
                particles[i] = self._update_position(particles[i], pbest[i], gbest)
                # Evaluate new position
                fitness = self._evaluate_fitness(particles[i])
                # Update personal best if better
                if fitness > pbest_fitness[i]:
                    pbest[i] = particles[i].copy()
                    pbest_fitness[i] = fitness
                # Update global best if better
                if fitness > gbest_fitness:
                    gbest = particles[i].copy()
                    gbest_fitness = fitness
                    best_text = self._decode_with_mapping(gbest)
                    print(f"Iteration {iteration}: New best fitness = {gbest_fitness:.4f}")
                    print(f"Decoded text: {best_text[:100]}...")
            # Track statistics
            avg_fitness = np.mean([self._evaluate_fitness(p) for p in particles])
            stats['best_fitness_per_iteration'].append(gbest_fitness)
            stats['avg_fitness_per_iteration'].append(avg_fitness)
            # Decrease inertia weight over time for better convergence
            self.w = max(0.4, self.w * 0.99)
        return {
            'best_individual': gbest,
            'best_fitness': gbest_fitness,
            'best_text': best_text,
            'stats': stats
        }

class CuckooSearch:
def __init__(self, ciphertext, num_nests=25, iterations=100, pa=0.25, alpha=1.0):
self.ciphertext = ciphertext
self.num_nests = num_nests
self.iterations = iterations
self.pa = pa # Probability of abandoning nests
self.alpha = alpha # Scaling factor for Lévy flights
self.fitness_evaluator = FitnessEvaluator()
self.bacon = BaconCipher()
# All possible 5-bit binary patterns
self.all_patterns = [format(i, '05b').replace('0', 'a').replace('1', 'b') for i in range(32)]
# The 26 letters of the alphabet
self.letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
def _create_nest(self):
"""Create a random mapping from binary patterns to letters"""
# Select 26 patterns from the 32 possible patterns
selected_patterns = random.sample(self.all_patterns, 26)
# Create a mapping from patterns to letters
mapping = {}
for i, letter in enumerate(self.letters):
mapping[selected_patterns[i]] = letter
return mapping
def _decode_with_mapping(self, mapping):
"""Decode the ciphertext using a given mapping"""
# Split the ciphertext into 5-character chunks
chunks = []
current_chunk = ""
for char in self.ciphertext:
if char in 'ab':
current_chunk += char
if len(current_chunk) == 5:
chunks.append(current_chunk)
current_chunk = ""
elif char.isspace():
if current_chunk: # Handle incomplete chunks
chunks.append(current_chunk)
current_chunk = ""
chunks.append(' ')
# Handle any remaining characters
if current_chunk:
chunks.append(current_chunk)
# Decode each chunk
decoded = []
for chunk in chunks:
if chunk == ' ':
decoded.append(' ')
elif chunk in mapping:
decoded.append(mapping[chunk])
else:
# Handle patterns not in the mapping
decoded.append('?')
return ''.join(decoded)
def _evaluate_fitness(self, mapping):
"""Evaluate the fitness of a mapping"""
decoded_text = self._decode_with_mapping(mapping)
return self.fitness_evaluator.evaluate(decoded_text)
def _levy_flight(self, mapping):
"""Perform a Lévy flight from the current mapping"""
new_mapping = mapping.copy()
# Lévy flight step size (number of swaps): a heavy-tailed draw, capped at 10 swaps
step_size = int(1 + (random.random() ** (-1.0 / self.alpha)) % 10)
# Perform the swaps
for _ in range(step_size):
# Choose two random patterns to swap
patterns = list(new_mapping.keys())
idx1, idx2 = random.sample(range(len(patterns)), 2)
# Swap the letters
new_mapping[patterns[idx1]], new_mapping[patterns[idx2]] = new_mapping[patterns[idx2]], new_mapping[patterns[idx1]]
return new_mapping
def _get_new_nest(self, mapping1, mapping2):
"""Create a new nest by combining two existing nests"""
new_mapping = mapping1.copy()
# Choose a random number of elements to take from mapping2
num_elements = random.randint(1, 13) # Up to half the mapping
# Choose random patterns from mapping1 to replace
patterns1 = list(new_mapping.keys())
patterns_to_replace = random.sample(patterns1, num_elements)
# Choose random patterns from mapping2 to use
patterns2 = list(mapping2.keys())
patterns_to_use = random.sample(patterns2, num_elements)
# Replace the letters
for i in range(num_elements):
# Get the letter from mapping2
letter2 = mapping2[patterns_to_use[i]]
# Find where this letter is in mapping1
for pattern, letter in new_mapping.items():
if letter == letter2:
# Swap with the pattern we want to replace
new_mapping[pattern] = new_mapping[patterns_to_replace[i]]
new_mapping[patterns_to_replace[i]] = letter2
break
return new_mapping
def run(self):
"""Run the Cuckoo Search algorithm"""
# Initialize nests
nests = [self._create_nest() for _ in range(self.num_nests)]
# Evaluate fitness for each nest
fitness = [self._evaluate_fitness(nest) for nest in nests]
# Find the best nest
best_idx = np.argmax(fitness)
best_nest = nests[best_idx].copy()
best_fitness = fitness[best_idx]
best_text = self._decode_with_mapping(best_nest)
# Statistics to track
stats = {
'best_fitness_per_iteration': [],
'avg_fitness_per_iteration': []
}
# Main CS loop
for iteration in range(self.iterations):
# Get a cuckoo randomly by Lévy flights
cuckoo_idx = random.randint(0, self.num_nests - 1)
new_nest = self._levy_flight(nests[cuckoo_idx])
# Evaluate new nest
new_fitness = self._evaluate_fitness(new_nest)
# Choose a random nest to compare with
random_idx = random.randint(0, self.num_nests - 1)
# Replace the nest if the new one is better
if new_fitness > fitness[random_idx]:
nests[random_idx] = new_nest
fitness[random_idx] = new_fitness
# Update best nest if better
if new_fitness > best_fitness:
best_nest = new_nest.copy()
best_fitness = new_fitness
best_text = self._decode_with_mapping(best_nest)
print(f"Iteration {iteration}: New best fitness = {best_fitness:.4f}")
print(f"Decoded text: {best_text[:100]}...")
# Abandon a fraction of worse nests and build new ones
# Sort nests by fitness
sorted_indices = np.argsort(fitness)
# Abandon worst nests
num_to_abandon = int(self.pa * self.num_nests)
for i in range(num_to_abandon):
worst_idx = sorted_indices[i]
# Create a new nest by combining two good nests
good_idx1, good_idx2 = random.sample(list(sorted_indices[-10:]), 2)
nests[worst_idx] = self._get_new_nest(nests[good_idx1], nests[good_idx2])
# Evaluate new nest
fitness[worst_idx] = self._evaluate_fitness(nests[worst_idx])
# Update best nest if better
if fitness[worst_idx] > best_fitness:
best_nest = nests[worst_idx].copy()
best_fitness = fitness[worst_idx]
best_text = self._decode_with_mapping(best_nest)
print(f"Iteration {iteration}: New best fitness = {best_fitness:.4f}")
print(f"Decoded text: {best_text[:100]}...")
# Track statistics
avg_fitness = np.mean(fitness)
stats['best_fitness_per_iteration'].append(best_fitness)
stats['avg_fitness_per_iteration'].append(avg_fitness)
return {
'best_individual': best_nest,
'best_fitness': best_fitness,
'best_text': best_text,
'stats': stats
}

class GreyWolfOptimizer:
def __init__(self, ciphertext, num_wolves=30, iterations=100):
self.ciphertext = ciphertext
self.num_wolves = num_wolves
self.iterations = iterations
self.fitness_evaluator = FitnessEvaluator()
self.bacon = BaconCipher()
# All possible 5-bit binary patterns
self.all_patterns = [format(i, '05b').replace('0', 'a').replace('1', 'b') for i in range(32)]
# The 26 letters of the alphabet
self.letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
def _create_wolf(self):
"""Create a random mapping from binary patterns to letters"""
# Select 26 patterns from the 32 possible patterns
selected_patterns = random.sample(self.all_patterns, 26)
# Create a mapping from patterns to letters
mapping = {}
for i, letter in enumerate(self.letters):
mapping[selected_patterns[i]] = letter
return mapping
def _decode_with_mapping(self, mapping):
"""Decode the ciphertext using a given mapping"""
# Split the ciphertext into 5-character chunks
chunks = []
current_chunk = ""
for char in self.ciphertext:
if char in 'ab':
current_chunk += char
if len(current_chunk) == 5:
chunks.append(current_chunk)
current_chunk = ""
elif char.isspace():
if current_chunk: # Handle incomplete chunks
chunks.append(current_chunk)
current_chunk = ""
chunks.append(' ')
# Handle any remaining characters
if current_chunk:
chunks.append(current_chunk)
# Decode each chunk
decoded = []
for chunk in chunks:
if chunk == ' ':
decoded.append(' ')
elif chunk in mapping:
decoded.append(mapping[chunk])
else:
# Handle patterns not in the mapping
decoded.append('?')
return ''.join(decoded)
def _evaluate_fitness(self, mapping):
"""Evaluate the fitness of a mapping"""
decoded_text = self._decode_with_mapping(mapping)
return self.fitness_evaluator.evaluate(decoded_text)
def _move_towards_leader(self, wolf, leader, a):
"""Move a wolf towards a leader based on GWO equations"""
new_mapping = wolf.copy()
# Calculate number of swaps based on the a parameter
num_swaps = int(a * 10) # Scale to a reasonable number
# Identify differences between wolf and leader
wolf_items = list(wolf.items())
leader_items = list(leader.items())
differences = []
for i, (pattern, letter) in enumerate(wolf_items):
if leader_items[i][1] != letter:
differences.append((i, leader_items[i][1]))
# Apply some of the differences
if differences and num_swaps > 0:
swaps = min(num_swaps, len(differences))
for i, new_letter in random.sample(differences, swaps):
pattern = wolf_items[i][0]
# Find where the new letter currently is
for p, l in new_mapping.items():
if l == new_letter:
# Swap the letters
new_mapping[p] = new_mapping[pattern]
new_mapping[pattern] = new_letter
break
return new_mapping
def run(self):
"""Run the Grey Wolf Optimizer algorithm"""
# Initialize wolves
wolves = [self._create_wolf() for _ in range(self.num_wolves)]
# Evaluate fitness for each wolf
fitness = [self._evaluate_fitness(wolf) for wolf in wolves]
# Sort wolves by fitness
sorted_indices = np.argsort(fitness)[::-1]
# Initialize alpha, beta, and delta wolves
alpha = wolves[sorted_indices[0]].copy()
alpha_fitness = fitness[sorted_indices[0]]
beta = wolves[sorted_indices[1]].copy()
beta_fitness = fitness[sorted_indices[1]]
delta = wolves[sorted_indices[2]].copy()
delta_fitness = fitness[sorted_indices[2]]
best_text = self._decode_with_mapping(alpha)
# Statistics to track
stats = {
'best_fitness_per_iteration': [],
'avg_fitness_per_iteration': []
}
# Main GWO loop
for iteration in range(self.iterations):
# Update a parameter (linearly decreasing from 2 to 0)
a = 2 - iteration * (2 / self.iterations)
# Update each wolf
for i in range(self.num_wolves):
# Create three new positions by moving towards alpha, beta, and delta
new_pos1 = self._move_towards_leader(wolves[i], alpha, a)
new_pos2 = self._move_towards_leader(wolves[i], beta, a)
new_pos3 = self._move_towards_leader(wolves[i], delta, a)
# Evaluate fitness for each new position
fitness1 = self._evaluate_fitness(new_pos1)
fitness2 = self._evaluate_fitness(new_pos2)
fitness3 = self._evaluate_fitness(new_pos3)
# Choose the best position
if fitness1 >= fitness2 and fitness1 >= fitness3:
wolves[i] = new_pos1
fitness[i] = fitness1
elif fitness2 >= fitness1 and fitness2 >= fitness3:
wolves[i] = new_pos2
fitness[i] = fitness2
else:
wolves[i] = new_pos3
fitness[i] = fitness3
# Sort wolves by fitness
sorted_indices = np.argsort(fitness)[::-1]
# Update alpha, beta, and delta wolves
if fitness[sorted_indices[0]] > alpha_fitness:
alpha = wolves[sorted_indices[0]].copy()
alpha_fitness = fitness[sorted_indices[0]]
best_text = self._decode_with_mapping(alpha)
print(f"Iteration {iteration}: New best fitness = {alpha_fitness:.4f}")
print(f"Decoded text: {best_text[:100]}...")
if fitness[sorted_indices[1]] > beta_fitness:
beta = wolves[sorted_indices[1]].copy()
beta_fitness = fitness[sorted_indices[1]]
if fitness[sorted_indices[2]] > delta_fitness:
delta = wolves[sorted_indices[2]].copy()
delta_fitness = fitness[sorted_indices[2]]
# Track statistics
avg_fitness = np.mean(fitness)
stats['best_fitness_per_iteration'].append(alpha_fitness)
stats['avg_fitness_per_iteration'].append(avg_fitness)
return {
'best_individual': alpha,
'best_fitness': alpha_fitness,
'best_text': best_text,
'stats': stats
}

class RedDeerAlgorithm:
def __init__(self, ciphertext, num_deer=30, iterations=100,
num_males=10, num_hinds=20, roaring_power=0.4):
self.ciphertext = ciphertext
self.num_deer = num_deer
self.iterations = iterations
self.num_males = num_males
self.num_hinds = num_hinds
self.roaring_power = roaring_power # Power of male deer to attract hinds
self.fitness_evaluator = FitnessEvaluator()
self.bacon = BaconCipher()
# All possible 5-bit binary patterns
self.all_patterns = [format(i, '05b').replace('0', 'a').replace('1', 'b') for i in range(32)]
# The 26 letters of the alphabet
self.letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
def _create_deer(self):
"""Create a random mapping from binary patterns to letters"""
# Select 26 patterns from the 32 possible patterns
selected_patterns = random.sample(self.all_patterns, 26)
# Create a mapping from patterns to letters
mapping = {}
for i, letter in enumerate(self.letters):
mapping[selected_patterns[i]] = letter
return mapping
def _decode_with_mapping(self, mapping):
"""Decode the ciphertext using a given mapping"""
# Split the ciphertext into 5-character chunks
chunks = []
current_chunk = ""
for char in self.ciphertext:
if char in 'ab':
current_chunk += char
if len(current_chunk) == 5:
chunks.append(current_chunk)
current_chunk = ""
elif char.isspace():
if current_chunk: # Handle incomplete chunks
chunks.append(current_chunk)
current_chunk = ""
chunks.append(' ')
# Handle any remaining characters
if current_chunk:
chunks.append(current_chunk)
# Decode each chunk
decoded = []
for chunk in chunks:
if chunk == ' ':
decoded.append(' ')
elif chunk in mapping:
decoded.append(mapping[chunk])
else:
# Handle patterns not in the mapping
decoded.append('?')
return ''.join(decoded)
def _evaluate_fitness(self, mapping):
"""Evaluate the fitness of a mapping"""
decoded_text = self._decode_with_mapping(mapping)
return self.fitness_evaluator.evaluate(decoded_text)
def _fight(self, male1, male1_fitness, male2, male2_fitness):
"""Simulate a fight between two male deer"""
# The winner is determined by fitness
if male1_fitness > male2_fitness:
winner = male1.copy()
loser = male2.copy()
winner_fitness = male1_fitness
else:
winner = male2.copy()
loser = male1.copy()
winner_fitness = male2_fitness
# The loser moves towards the winner
new_loser = self._move_towards(loser, winner, 0.5)
return winner, winner_fitness, new_loser
def _move_towards(self, deer1, deer2, power):
"""Move deer1 towards deer2 based on power parameter"""
new_deer = deer1.copy()
# Calculate number of swaps based on the power parameter
num_swaps = int(power * 10) # Scale to a reasonable number
# Identify differences between deer1 and deer2
deer1_items = list(deer1.items())
deer2_items = list(deer2.items())
differences = []
for i, (pattern, letter) in enumerate(deer1_items):
if deer2_items[i][1] != letter:
differences.append((i, deer2_items[i][1]))
# Apply some of the differences
if differences and num_swaps > 0:
swaps = min(num_swaps, len(differences))
for i, new_letter in random.sample(differences, swaps):
pattern = deer1_items[i][0]
# Find where the new letter currently is
for p, l in new_deer.items():
if l == new_letter:
# Swap the letters
new_deer[p] = new_deer[pattern]
new_deer[pattern] = new_letter
break
return new_deer
def _mate(self, male, hind):
"""Create a new deer by mating a male and a hind"""
# Create a new deer by combining male and hind
child = {}
# Get the keys (patterns) from both parents
male_keys = list(male.keys())
hind_keys = list(hind.keys())
# Choose a crossover point
crossover_point = random.randint(1, 25)
# Add first part from male
for i in range(crossover_point):
child[male_keys[i]] = male[male_keys[i]]
# Add remaining letters from hind, avoiding duplicates
used_letters = set(child.values())
for key in hind_keys:
letter = hind[key]
if letter not in used_letters and len(child) < 26:
child[key] = letter
used_letters.add(letter)
# If child doesn't have all 26 letters, fill in the missing ones
if len(child) < 26:
missing_letters = set(self.letters) - used_letters
missing_patterns = set(self.all_patterns) - set(child.keys())
for letter in missing_letters:
pattern = random.choice(list(missing_patterns))
child[pattern] = letter
missing_patterns.remove(pattern)
# Add some mutation
if random.random() < 0.2: # 20% chance of mutation
patterns = list(child.keys())
idx1, idx2 = random.sample(range(len(patterns)), 2)
child[patterns[idx1]], child[patterns[idx2]] = child[patterns[idx2]], child[patterns[idx1]]
return child
def run(self):
"""Run the Red Deer Algorithm"""
# Initialize deer population
deer = [self._create_deer() for _ in range(self.num_deer)]
# Evaluate fitness for each deer
fitness = [self._evaluate_fitness(d) for d in deer]
# Sort deer by fitness
sorted_indices = np.argsort(fitness)[::-1]
# Separate males and hinds
males = [deer[i].copy() for i in sorted_indices[:self.num_males]]
males_fitness = [fitness[i] for i in sorted_indices[:self.num_males]]
hinds = [deer[i].copy() for i in sorted_indices[self.num_males:self.num_males+self.num_hinds]]
hinds_fitness = [fitness[i] for i in sorted_indices[self.num_males:self.num_males+self.num_hinds]]
# Initialize best solution
best_deer = males[0].copy()
best_fitness = males_fitness[0]
best_text = self._decode_with_mapping(best_deer)
# Statistics to track
stats = {
'best_fitness_per_iteration': [],
'avg_fitness_per_iteration': []
}
# Main RDA loop
for iteration in range(self.iterations):
# 1. Roaring: Males compete with each other
for i in range(self.num_males):
# Choose a random opponent
opponent_idx = random.randint(0, self.num_males - 1)
if opponent_idx == i:
opponent_idx = (opponent_idx + 1) % self.num_males
# Fight
winner, winner_fitness, loser = self._fight(
males[i], males_fitness[i],
males[opponent_idx], males_fitness[opponent_idx]
)
# Update males
males[i] = winner
males_fitness[i] = winner_fitness
males[opponent_idx] = loser
males_fitness[opponent_idx] = self._evaluate_fitness(loser)
# Sort males by fitness
sorted_indices = np.argsort(males_fitness)[::-1]
males = [males[i] for i in sorted_indices]
males_fitness = [males_fitness[i] for i in sorted_indices]
# 2. Mating: Males mate with hinds
offspring = []
for i in range(self.num_males):
# Number of hinds attracted by this male
num_attracted = int(self.roaring_power * (self.num_males - i))
for _ in range(num_attracted):
# Choose a random hind
hind_idx = random.randint(0, self.num_hinds - 1)
# Mate and create offspring
child = self._mate(males[i], hinds[hind_idx])
offspring.append(child)
# 3. Update population
# Evaluate offspring
offspring_fitness = [self._evaluate_fitness(child) for child in offspring]
# Combine all deer
all_deer = males + hinds + offspring
all_fitness = males_fitness + hinds_fitness + offspring_fitness
# Sort by fitness
sorted_indices = np.argsort(all_fitness)[::-1]
# Keep the best deer
deer = [all_deer[i] for i in sorted_indices[:self.num_deer]]
fitness = [all_fitness[i] for i in sorted_indices[:self.num_deer]]
# Update males and hinds
males = [deer[i].copy() for i in range(self.num_males)]
males_fitness = [fitness[i] for i in range(self.num_males)]
hinds = [deer[i].copy() for i in range(self.num_males, self.num_males+self.num_hinds)]
hinds_fitness = [fitness[i] for i in range(self.num_males, self.num_males+self.num_hinds)]
# Update best solution
if fitness[0] > best_fitness:
best_deer = deer[0].copy()
best_fitness = fitness[0]
best_text = self._decode_with_mapping(best_deer)
print(f"Iteration {iteration}: New best fitness = {best_fitness:.4f}")
print(f"Decoded text: {best_text[:100]}...")
# Track statistics
avg_fitness = np.mean(fitness)
stats['best_fitness_per_iteration'].append(best_fitness)
stats['avg_fitness_per_iteration'].append(avg_fitness)
return {
'best_individual': best_deer,
'best_fitness': best_fitness,
'best_text': best_text,
'stats': stats
}

class ComparativeAnalysis:
def __init__(self, plaintext, iterations=100):
self.plaintext = plaintext
self.iterations = iterations
# Initialize Bacon Cipher
self.bacon = BaconCipher()
# Encode the plaintext
self.ciphertext = self.bacon.encode(plaintext)
# Initialize fitness evaluator
self.fitness_evaluator = FitnessEvaluator()
# Results storage
self.results = {}
def run_algorithm(self, algorithm_name, algorithm_class, **kwargs):
"""Run a specific algorithm and record results"""
print(f"\nRunning {algorithm_name}...")
# Set iterations
if 'iterations' not in kwargs:
kwargs['iterations'] = self.iterations
# Initialize algorithm
algorithm = algorithm_class(self.ciphertext, **kwargs)
# Record start time
start_time = time.time()
# Run algorithm
result = algorithm.run()
# Record end time
end_time = time.time()
# Calculate execution time
execution_time = end_time - start_time
# Store results
self.results[algorithm_name] = {
'best_individual': result['best_individual'],
'best_fitness': result['best_fitness'],
'best_text': result['best_text'],
'stats': result['stats'],
'execution_time': execution_time
}
print(f"{algorithm_name} completed in {execution_time:.2f} seconds")
print(f"Best fitness: {result['best_fitness']:.4f}")
print(f"Decoded text: {result['best_text'][:100]}...")
return self.results[algorithm_name]
def run_all_algorithms(self):
"""Run all algorithms for comparison"""
# Genetic Algorithm
self.run_algorithm('GA', GeneticAlgorithm,
population_size=50, generations=self.iterations,
crossover_rate=0.8, mutation_rate=0.2, elitism=2)
# Particle Swarm Optimization
self.run_algorithm('PSO', ParticleSwarmOptimization,
num_particles=30, iterations=self.iterations,
w=0.7, c1=1.5, c2=1.5)
# Cuckoo Search
self.run_algorithm('CS', CuckooSearch,
num_nests=25, iterations=self.iterations,
pa=0.25, alpha=1.0)
# Grey Wolf Optimizer
self.run_algorithm('GWO', GreyWolfOptimizer,
num_wolves=30, iterations=self.iterations)
# Red Deer Algorithm
self.run_algorithm('RDA', RedDeerAlgorithm,
num_deer=30, iterations=self.iterations,
num_males=10, num_hinds=20, roaring_power=0.4)
return self.results
def plot_convergence(self):
"""Plot convergence curves for all algorithms"""
plt.figure(figsize=(12, 6))
for algorithm_name, result in self.results.items():
plt.plot(result['stats']['best_fitness_per_iteration'],
label=f"{algorithm_name} (Best: {result['best_fitness']:.4f})")
plt.xlabel('Iteration')
plt.ylabel('Fitness')
plt.title('Convergence Comparison')
plt.legend()
plt.grid(True)
plt.savefig('convergence_comparison.png')
plt.close()
# Also plot average fitness
plt.figure(figsize=(12, 6))
for algorithm_name, result in self.results.items():
plt.plot(result['stats']['avg_fitness_per_iteration'],
label=f"{algorithm_name} (Avg)")
plt.xlabel('Iteration')
plt.ylabel('Average Fitness')
plt.title('Average Fitness Comparison')
plt.legend()
plt.grid(True)
plt.savefig('avg_fitness_comparison.png')
plt.close()
def plot_execution_time(self):
"""Plot execution time for all algorithms"""
algorithms = list(self.results.keys())
times = [self.results[alg]['execution_time'] for alg in algorithms]
plt.figure(figsize=(10, 6))
plt.bar(algorithms, times)
plt.xlabel('Algorithm')
plt.ylabel('Execution Time (seconds)')
plt.title('Execution Time Comparison')
plt.grid(True, axis='y')
plt.savefig('execution_time_comparison.png')
plt.close()
def plot_final_fitness(self):
"""Plot final fitness for all algorithms"""
algorithms = list(self.results.keys())
fitness = [self.results[alg]['best_fitness'] for alg in algorithms]
plt.figure(figsize=(10, 6))
plt.bar(algorithms, fitness)
plt.xlabel('Algorithm')
plt.ylabel('Final Fitness')
plt.title('Final Fitness Comparison')
plt.grid(True, axis='y')
plt.savefig('final_fitness_comparison.png')
plt.close()
def generate_report(self):
"""Generate a comprehensive report of the results"""
report = "# Bacon Cipher Decryption - Comparative Analysis Report\n\n"
# Original text and ciphertext
report += f"## Original Text\n\n```\n{self.plaintext}\n```\n\n"
report += f"## Ciphertext\n\n```\n{self.ciphertext}\n```\n\n"
# Summary table
report += "## Summary\n\n"
report += "| Algorithm | Final Fitness | Execution Time (s) | Convergence Rate |\n"
report += "|-----------|--------------|-------------------|------------------|\n"
for alg in self.results:
# Calculate convergence rate (fitness gain per iteration)
stats = self.results[alg]['stats']
if len(stats['best_fitness_per_iteration']) > 1:
initial = stats['best_fitness_per_iteration'][0]
final = stats['best_fitness_per_iteration'][-1]
convergence_rate = (final - initial) / len(stats['best_fitness_per_iteration'])
else:
convergence_rate = 0
report += f"| {alg} | {self.results[alg]['best_fitness']:.4f} | {self.results[alg]['execution_time']:.2f} | {convergence_rate:.6f} |\n"
report += "\n\n"
# Detailed results for each algorithm
report += "## Detailed Results\n\n"
for alg in self.results:
report += f"### {alg}\n\n"
report += f"- **Best Fitness:** {self.results[alg]['best_fitness']:.4f}\n"
report += f"- **Execution Time:** {self.results[alg]['execution_time']:.2f} seconds\n"
report += f"- **Decoded Text:** {self.results[alg]['best_text'][:200]}...\n\n"
# Save report to file
with open('comparative_analysis_report.md', 'w') as f:
f.write(report)
print("Report generated: comparative_analysis_report.md")
return report

# Run the comparative analysis
# Sample text to encrypt and decrypt
sample_text = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
# For a more challenging test, use a longer text
longer_text = """
CRYPTOGRAPHY HAS PLAYED A CRUCIAL ROLE IN SECURING INFORMATION
FROM UNAUTHORIZED ACCESS FOR CENTURIES THE BACON CIPHER DEVELOPED
BY SIR FRANCIS BACON IN THE SIXTEENTH CENTURY IS A CLASSICAL
STEGANOGRAPHIC CIPHER THAT HIDES MESSAGES IN THE FORM OF BINARY SEQUENCES
"""
# Create a comparative analysis instance
# Use a small number of iterations for testing, increase for better results
analysis = ComparativeAnalysis(sample_text, iterations=20)
# Run all algorithms
print("Starting comparative analysis...")
analysis.run_all_algorithms()
# Generate visualizations
print("Generating visualizations...")
analysis.plot_convergence()
analysis.plot_execution_time()
analysis.plot_final_fitness()
# Generate report
print("Generating report...")
report = analysis.generate_report()
# Download the generated files (Google Colab only; requires: from google.colab import files)
files.download('convergence_comparison.png')
files.download('avg_fitness_comparison.png')
files.download('execution_time_comparison.png')
files.download('final_fitness_comparison.png')
files.download('comparative_analysis_report.md')
print("Analysis complete!")

This implementation is practical for conducting research and including the results in a research paper. The framework is easily extended: any new metaheuristic can be plugged in by implementing the same run() interface and registering it in run_all_algorithms. The code also produces publication-ready outputs: convergence curves, execution-time and final-fitness comparisons, and a Markdown report summarizing all algorithms. The Google Colab environment suits this work well, providing free computation, preinstalled scientific libraries, and straightforward file export.
This implementation gives a solid foundation for a thorough comparative study suitable for publication in cryptography, computational intelligence, or optimization research journals.
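The pattern-to-letter mapping that every optimizer above searches over can be exercised in isolation. The following standalone sketch (standard library only; `decode` is a simplified stand-in for the classes' `_decode_with_mapping`, ignoring spaces and incomplete chunks) builds the 32 five-symbol a/b patterns exactly as in each `__init__`, fixes the classical Bacon table in which pattern i maps to the i-th letter, and decodes a short ciphertext.

```python
# The 32 possible 5-symbol a/b patterns, as built in each optimizer's __init__
all_patterns = [format(i, '05b').replace('0', 'a').replace('1', 'b') for i in range(32)]
letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

# The classical (26-letter) Bacon table: pattern i -> i-th letter;
# the metaheuristics search over permutations of exactly this kind of dict
bacon_mapping = {all_patterns[i]: letters[i] for i in range(26)}

def decode(ciphertext, mapping):
    """Split the ciphertext into 5-symbol chunks and decode each via the mapping.
    Unknown patterns decode to '?', as in _decode_with_mapping."""
    chunks = [ciphertext[i:i + 5] for i in range(0, len(ciphertext), 5)]
    return ''.join(mapping.get(chunk, '?') for chunk in chunks)

# 'H' is letter index 7 ('aabbb'), 'I' is index 8 ('abaaa')
print(decode('aabbbabaaa', bacon_mapping))  # -> HI
```

A candidate key produced by `_create_nest`, `_create_wolf`, or `_create_deer` is simply a different assignment of 26 of these 32 patterns to the letters, which is why the fitness function, rather than the representation, carries all the linguistic knowledge.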