File F905001 (text/x-diff, 14 KB)
Attached To: rARCHIVING archiving
Created: Fri, Aug 29, 7:09 PM
diff --git a/app/models/client.rb b/app/models/client.rb
index d44df53..eb737b5 100644
--- a/app/models/client.rb
+++ b/app/models/client.rb
@@ -1,143 +1,143 @@
# Bacula Client class.
# All hosts that are getting backed up with Bacula have a Client entry, with
# attributes concerning the Client.
class Client < ActiveRecord::Base
establish_connection BACULA_CONF
self.table_name = "#{connection_config[:database]}.Client"
self.primary_key = :ClientId
alias_attribute :name, :Name
alias_attribute :uname, :Uname
alias_attribute :auto_prune, :AutoPrune
alias_attribute :file_retention, :FileRetention
alias_attribute :job_retention, :JobRetention
has_many :jobs, foreign_key: :ClientId
has_one :host, foreign_key: :name, primary_key: :Name
scope :for_user, ->(user_id) { joins(host: :users).where(users: { id: user_id }) }
DAY_SECS = 60 * 60 * 24
delegate :manually_inserted?, :origin, to: :host
# Fetches the client's job_templates that are already persisted to
# Bacula's configuration
#
# @return [ActiveRecord::Relation] of `JobTemplate`
def persisted_jobs
host.job_templates.where(baculized: true).includes(:fileset, :schedule)
end
# Fetches the client's performed jobs in reverse chronological order
#
# @return [ActiveRecord::Relation] of `Job`
def recent_jobs
- jobs.order(EndTime: :desc).includes(:file_set)
+ jobs.order(EndTime: :desc).includes(:file_set, :logs)
end
# Helper method. Shows the client's job retention (stored in
# seconds) in days.
#
# @return [Integer]
def job_retention_days
job_retention / DAY_SECS
end
# Helper method. Shows the client's file retention (stored in
# seconds) in days.
#
# @return [Integer]
def file_retention_days
file_retention / DAY_SECS
end
# Helper method for auto_prune
#
# @return [String] 'yes' or 'no'
def auto_prune_human
auto_prune == 1 ? 'yes' : 'no'
end
# Helper method for displaying the last job's datetime in a nice format.
def last_job_date_formatted
if job_time = last_job_datetime
I18n.l(job_time, format: :long)
end
end
# Helper method for fetching the last job's datetime
def last_job_datetime
jobs.backup_type.last.try(:end_time)
end
# Fetches the first and last job's end times.
#
# @return [Array] of date strings in YYYY-MM-DD format
def backup_enabled_datetime_range
jobs.backup_type.pluck(:end_time).minmax.map { |x| x.strftime('%Y-%m-%d') }
end
# Shows whether the client has any backup jobs in the Bacula config
#
# @return [Boolean]
def is_backed_up?
jobs.backup_type.any?
end
# Shows the total file size of the jobs that run for a specific client
#
# @return [Integer] Size in Bytes
def backup_jobs_size
jobs.backup_type.map(&:job_bytes).sum
end
# Shows the total file count for the jobs that run for a specific client
#
# @return [Integer] File count
def files_count
jobs.map(&:job_files).sum
end
# Counts the client's jobs that are running at the moment
#
# @return [Integer]
def running_jobs
jobs.running.count
end
# Displays the Bacula config that is generated from the client's
# host
#
# @return [String]
def bacula_config
return unless host
host.baculize_config.join("\n")
end
# Fetches the job ids that will construct the desired restore
#
# @param file_set_id [Integer] the fileset
# @param restore_point [Datetime] the restore point
#
# @return [Array] of ids
def get_job_ids(file_set_id, restore_point)
job_ids = {}
backup_jobs = jobs.backup_type.terminated.where(file_set_id: file_set_id)
backup_jobs = backup_jobs.where('EndTime < ?', restore_point) if restore_point
job_ids['F'] = backup_jobs.where(level: 'F').pluck(:JobId).last
return [] if job_ids['F'].nil?
job_ids['D'] = backup_jobs.where(level: 'D').where("JobId > ?", job_ids['F']).pluck(:JobId).last
job_ids['I'] = backup_jobs.where(level: 'I').
where("JobId > ?", job_ids['D'] || job_ids['F'] ).pluck(:JobId)
job_ids.values.flatten.compact
end
# Fetches the bacula filesets that are associated with the client
def file_sets
FileSet.joins(:jobs).where(Job: { JobId: job_ids }).uniq
end
end
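The restore-chain selection in `get_job_ids` above picks the latest full backup, then the latest differential taken after it, then every incremental after that. The same logic can be sketched over plain hashes, outside ActiveRecord (the `restore_chain` helper and the sample job data below are hypothetical, for illustration only):

```ruby
# Minimal sketch of the F/D/I restore-chain selection from Client#get_job_ids,
# using plain hashes instead of ActiveRecord relations. Job data is made up.
def restore_chain(jobs)
  # Latest full backup; without one there is nothing to restore.
  full = jobs.select { |j| j[:level] == 'F' }.map { |j| j[:id] }.last
  return [] if full.nil?

  # Latest differential taken after the full backup, if any.
  diff = jobs.select { |j| j[:level] == 'D' && j[:id] > full }
             .map { |j| j[:id] }.last

  # Every incremental after the differential (or after the full, if none).
  base = diff || full
  incs = jobs.select { |j| j[:level] == 'I' && j[:id] > base }
             .map { |j| j[:id] }

  [full, diff, incs].flatten.compact
end

jobs = [
  { id: 1, level: 'F' }, { id: 2, level: 'I' }, { id: 3, level: 'D' },
  { id: 4, level: 'I' }, { id: 5, level: 'I' }
]
restore_chain(jobs) # => [1, 3, 4, 5]
```

Here the ids stand in for `JobId`, which Bacula assigns in increasing order, so the `j[:id] > full` comparisons play the role of the `"JobId > ?"` conditions in the original query.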
diff --git a/app/models/job.rb b/app/models/job.rb
index 031d87b..128aa11 100644
--- a/app/models/job.rb
+++ b/app/models/job.rb
@@ -1,129 +1,145 @@
# Bacula Job table.
#
# The Job table contains one record for each Job run by Bacula.
# Thus normally, there will be one per day per machine added to the database.
# Note, the JobId is used to index Job records in the database, and it often is shown to the user
# in the Console program.
# However, care must be taken with its use as it is not unique from database to database.
# For example, the user may have a database for Client data saved on machine Rufus and another
# database for Client data saved on machine Roxie.
# In this case, the two database will each have JobIds that match those in another database.
# For a unique reference to a Job, see the Job field below.
#
# The Name field of the Job record corresponds to the Name resource record given in the
# Director's configuration file.
# Thus it is a generic name, and it will be normal to find many Jobs (or even all Jobs)
# with the same Name.
#
# The Job field contains a combination of the Name and the schedule time of the Job by the Director.
# Thus for a given Director, even with multiple Catalog databases, the Job will contain a unique
# name that represents the Job.
#
# For a given Storage daemon, the VolSessionId and VolSessionTime form a unique identification
# of the Job.
#
# This will be the case even if multiple Directors are using the same Storage daemon.
#
# The Job Type (or simply Type) can have one of several values.
class Job < ActiveRecord::Base
establish_connection BACULA_CONF
self.table_name = "#{connection_config[:database]}.Job"
self.primary_key = :JobId
alias_attribute :job_id, :JobId
alias_attribute :job, :Job
alias_attribute :name, :Name
alias_attribute :type, :Type
alias_attribute :level, :Level
alias_attribute :client_id, :ClientId
alias_attribute :job_status, :JobStatus
alias_attribute :sched_time, :SchedTime
alias_attribute :start_time, :StartTime
alias_attribute :end_time, :EndTime
alias_attribute :real_end_time, :RealEndTime
alias_attribute :job_t_date, :JobTDate
alias_attribute :vol_session_id, :VolSessionId
alias_attribute :vol_session_time, :VolSessionTime
alias_attribute :job_files, :JobFiles
alias_attribute :job_bytes, :JobBytes
alias_attribute :read_bytes, :ReadBytes
alias_attribute :job_errors, :JobErrors
alias_attribute :job_missing_files, :JobMissingFiles
alias_attribute :pool_id, :PoolId
alias_attribute :file_set_id, :FileSetId
alias_attribute :prior_job_id, :PriorJobId
alias_attribute :purged_files, :PurgedFiles
alias_attribute :has_base, :HasBase
alias_attribute :has_cache, :HasCache
alias_attribute :reviewed, :Reviewed
alias_attribute :comment, :Comment
belongs_to :pool, foreign_key: :PoolId
belongs_to :file_set, foreign_key: :FileSetId
belongs_to :client, foreign_key: :ClientId
has_many :bacula_files, foreign_key: :JobId
has_many :base_files, foreign_key: :BaseJobId
has_many :job_media, foreign_key: :JobId
has_many :logs, foreign_key: :JobId
scope :running, -> { where(job_status: 'R') }
scope :terminated, -> { where(job_status: 'T') }
scope :backup_type, -> { where(type: 'B') }
scope :restore_type, -> { where(type: 'R') }
HUMAN_STATUS = {
'A' => 'Canceled by user',
'B' => 'Blocked',
'C' => 'Created, not yet running',
'D' => 'Verify found differences',
'E' => 'Terminated with errors',
'F' => 'Waiting for Client',
'M' => 'Waiting for media mount',
'R' => 'Running',
'S' => 'Waiting for Storage daemon',
'T' => 'Completed successfully',
'a' => 'SD despooling attributes',
'c' => 'Waiting for client resource',
'd' => 'Waiting on maximum jobs',
'e' => 'Non-fatal error',
'f' => 'Fatal error',
'i' => 'Doing batch insert file records',
'j' => 'Waiting for job resource',
'm' => 'Waiting for new media',
'p' => 'Waiting on higher priority jobs',
's' => 'Waiting for storage resource',
't' => 'Waiting on start time'
}
paginates_per 20
def level_human
{
'F' => 'Full',
'D' => 'Differential',
'I' => 'Incremental'
}[level]
end
+ # Extracts the job's compression info by looking at the job's
+ # logs
+ #
+ # @return [String] the compression
+ def compression
+ logs.map { |log| log.compression }.uniq.compact.first
+ end
+
+ # Extracts the job's encryption info by looking at the job's
+ # logs
+ #
+ # @return [String] the encryption
+ def encryption
+ logs.map { |log| log.encryption }.uniq.compact.first
+ end
+
def status_human
HUMAN_STATUS[job_status]
end
def fileset
file_set.try(:file_set) || '-'
end
def start_time_formatted
if start_time
I18n.l(start_time, format: :long)
end
end
def end_time_formatted
if end_time
I18n.l(end_time, format: :long)
end
end
end
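The new `Job#compression` and `Job#encryption` above reduce per-log values to a single answer: collect every log's extracted value, drop duplicates and nils, and take the first. The same reduction as a standalone sketch (the `first_reported` helper and sample values are hypothetical):

```ruby
# Sketch of the first-distinct-non-nil reduction used by Job#compression
# and Job#encryption, over a plain array instead of a logs association.
def first_reported(values)
  values.uniq.compact.first
end

first_reported([nil, 'GZIP', 'GZIP']) # => "GZIP"
first_reported([nil, nil])            # => nil
```

Calling `uniq` before `compact` keeps the behavior of the original chain: a leading `nil` is deduplicated away rather than returned.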
diff --git a/app/models/log.rb b/app/models/log.rb
index 917d674..6a1f62a 100644
--- a/app/models/log.rb
+++ b/app/models/log.rb
@@ -1,24 +1,38 @@
# Bacula Log table.
#
# The Log table contains a log of all Job output.
class Log < ActiveRecord::Base
establish_connection BACULA_CONF
self.table_name = "#{connection_config[:database]}.Log"
self.primary_key = :LogId
alias_attribute :log_id, :LogId
alias_attribute :job_id, :JobId
alias_attribute :time, :Time
alias_attribute :log_text, :LogText
belongs_to :job, foreign_key: :JobId
paginates_per 20
def time_formatted
if time
I18n.l(time, format: :long)
end
end
+
+ # Extracts the log's compression info if there is any data available
+ #
+ # @return [String] the compression or nil
+ def compression
+ $1.strip if log_text =~ /.*Software Compression:(.*)\n.*/
+ end
+
+ # Extracts the log's encryption info if there is any data available
+ #
+ # @return [String] the encryption or nil
+ def encryption
+ $1.strip if log_text =~ /.*Encryption:(.*)\n.*/
+ end
end
diff --git a/app/views/admin/clients/_recent_job.html.erb b/app/views/admin/clients/_recent_job.html.erb
index 109626b..4eed9be 100644
--- a/app/views/admin/clients/_recent_job.html.erb
+++ b/app/views/admin/clients/_recent_job.html.erb
@@ -1,11 +1,13 @@
<tr class="<%= success_class(recent_job.job_status) %>">
<td><%= recent_job.name %></td>
<td><%= recent_job.job_id %></td>
<td><%= recent_job.level_human %></td>
<td><%= recent_job.fileset %></td>
<td><%= recent_job.start_time_formatted %></td>
<td><%= recent_job.end_time_formatted %></td>
<td><%= number_to_human_size(recent_job.job_bytes) %></td>
<td><%= number_by_magnitude(recent_job.job_files) %></td>
- <td><%= recent_job.status_human %>
+ <td><%= recent_job.status_human %></td>
+ <td><%= recent_job.encryption %></td>
+ <td><%= recent_job.compression %></td>
</tr>
diff --git a/app/views/admin/clients/_recent_jobs.html.erb b/app/views/admin/clients/_recent_jobs.html.erb
index 96514c6..951966e 100644
--- a/app/views/admin/clients/_recent_jobs.html.erb
+++ b/app/views/admin/clients/_recent_jobs.html.erb
@@ -1,22 +1,24 @@
<div class="col-xs-12">
<div class="table-responsive">
<table class="table table-striped table-bordered table-condensed">
<thead>
<tr>
<th>Name</th>
<th>JobId</th>
<th>Level</th>
<th>Fileset</th>
<th>Started At</th>
<th>Finished At</th>
<th>Bytes</th>
<th>Files</th>
<th>Status</th>
+ <th>Encryption</th>
+ <th>Compression</th>
</tr>
</thead>
<tbody>
<%= render partial: 'recent_job', collection: @jobs %>
</tbody>
</table>
</div>
</div>
diff --git a/app/views/clients/_recent_job.html.erb b/app/views/clients/_recent_job.html.erb
index 109626b..4eed9be 100644
--- a/app/views/clients/_recent_job.html.erb
+++ b/app/views/clients/_recent_job.html.erb
@@ -1,11 +1,13 @@
<tr class="<%= success_class(recent_job.job_status) %>">
<td><%= recent_job.name %></td>
<td><%= recent_job.job_id %></td>
<td><%= recent_job.level_human %></td>
<td><%= recent_job.fileset %></td>
<td><%= recent_job.start_time_formatted %></td>
<td><%= recent_job.end_time_formatted %></td>
<td><%= number_to_human_size(recent_job.job_bytes) %></td>
<td><%= number_by_magnitude(recent_job.job_files) %></td>
- <td><%= recent_job.status_human %>
+ <td><%= recent_job.status_human %></td>
+ <td><%= recent_job.encryption %></td>
+ <td><%= recent_job.compression %></td>
</tr>
diff --git a/app/views/clients/_recent_jobs.html.erb b/app/views/clients/_recent_jobs.html.erb
index 6a39c49..4a8f125 100644
--- a/app/views/clients/_recent_jobs.html.erb
+++ b/app/views/clients/_recent_jobs.html.erb
@@ -1,22 +1,24 @@
<div class="col-xs-12">
<div class="table-responsive">
<table class="table table-striped table-bordered table-condensed">
<thead>
<tr>
<th>Name</th>
<th>JobId</th>
<th>Level</th>
<th>Fileset</th>
<th>Started At</th>
<th>Finished At</th>
<th>Bytes</th>
<th>Files</th>
<th>Status</th>
+ <th>Encryption</th>
+ <th>Compression</th>
</tr>
</thead>
<tbody>
<%= render partial: 'clients/recent_job', collection: @jobs %>
</tbody>
</table>
</div>
</div>
diff --git a/app/views/kaminari/_gap.html.erb b/app/views/kaminari/_gap.html.erb
index 4167981..2a860c6 100644
--- a/app/views/kaminari/_gap.html.erb
+++ b/app/views/kaminari/_gap.html.erb
@@ -1,10 +1,10 @@
<%# Non-link tag that stands for skipped pages...
- available local variables
current_page: a page object for the currently displayed page
total_pages: total number of pages
per_page: number of items to fetch per page
remote: data-remote
-%>
<li>
- <span class="page gap"><%= t('views.pagination.truncate').html_safe %></span>
+ <span class="page gap">. . .</span>
</li>