My Terminology
Agent: A person (simulated or real) who makes a choice when encountering another Agent.
Encounter: Two Agents meet, each having 2, 3, or 4 possible choices.  These two sets of choices form a two-dimensional matrix of Payoffs for each Agent.  Each Agent assesses his/her options and makes a choice without knowing what choice the other Agent will make.  An Encounter may have anywhere from 4 to 16 possible outcomes.
Encounter Matrix: All possible Payoffs (gains or losses) to the Agents when assessing an Encounter.  For spatial clarity, the Self-Agent's choices may be considered as rows.  The Other-Agent's choices may be considered as columns.  Each row/column combination is a cell, containing two Payoffs, one for each Agent.
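For concreteness, here is a minimal Python sketch of an Encounter Matrix (illustrative only; the array layout, names, and placeholder random draws are my assumptions, not the actual simulation code): a 3-dimensional array whose first two axes are the Self-Agent's rows and the Other-Agent's columns, and whose last axis holds the two Payoffs in each cell.

```python
import numpy as np

# Hypothetical Encounter Matrix: rows are the Self-Agent's choices, columns
# are the Other-Agent's choices, and each cell holds (Self-Gain, Other-Gain).
rng = np.random.default_rng(0)
n_rows, n_cols = 3, 4                    # each Agent has 2, 3, or 4 choices
matrix = rng.normal(0.0, 1.0, size=(n_rows, n_cols, 2))   # placeholder draws

self_choice, other_choice = 1, 2         # each chosen without seeing the other
self_gain, other_gain = matrix[self_choice, other_choice]
```

(Correlated payoff generation, as required by the Payoff Correlation, is sketched further below.)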
Self-Agent: The Agent to whom we refer in the first or second person (the I or you making the decision).
Other-Agent: The Agent to whom we refer in the third person (the he, she, or it whom we encounter).
Self-Gain: The amount of value/points/money the Self-Agent gains (or loses) after an Encounter.  Gains are positive, losses are negative.
Other-Gain: The amount of value/points/money the Other-Agent gains (or loses) after an Encounter.
Commonwealth: The total gain (or loss) both Agents acquire after an Encounter.  Commonwealth is the sum of Self-Gain and Other-Gain.  A positive ethical goal is to maximize Commonwealth.
Equality (vs. Inequality): The difference between Self-Gain and Other-Gain after an Encounter.  In the best of all possible worlds, we would prefer that both Agents make equal gains.  Inequality implies either exploitation or sacrifice on the part of one of the Agents.  A positive ethical goal is to minimize Inequality.
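In code, Commonwealth and Inequality reduce to simple arithmetic on the two gains (a sketch; the function names are mine):

```python
def commonwealth(self_gain: float, other_gain: float) -> float:
    # Total gain (or loss) acquired by both Agents in an Encounter.
    return self_gain + other_gain

def inequality(self_gain: float, other_gain: float) -> float:
    # Absolute difference between the two gains; 0 means perfect Equality.
    return abs(self_gain - other_gain)
```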
Payoffs: The amount of gain or loss for each Agent within each cell of an Encounter Matrix.  These values are randomly generated in each cell, subject to the Payoff Correlation.  The mean value for both Self and Other Payoffs within an Encounter Matrix is zero.
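One way to generate such cell payoffs (a sketch only; the actual simulation may have drawn them differently) is a zero-mean bivariate normal distribution whose correlation is the desired Payoff Correlation:

```python
import numpy as np

def random_payoffs(n_rows, n_cols, payoff_correlation, rng=None):
    """Draw zero-mean (Self-Gain, Other-Gain) pairs with a given correlation.

    Sketch: a bivariate normal with unit variances and covariance equal to
    the Payoff Correlation.  Result shape: (n_rows, n_cols, 2).
    """
    rng = rng or np.random.default_rng()
    cov = [[1.0, payoff_correlation],
           [payoff_correlation, 1.0]]
    flat = rng.multivariate_normal([0.0, 0.0], cov, size=n_rows * n_cols)
    return flat.reshape(n_rows, n_cols, 2)

matrix = random_payoffs(3, 3, payoff_correlation=-0.5)
```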
Basic Strategy: An ethical decision strategy by which to make choices when encountering another Agent.  The primary goal is to maximize Self-Gain.  A secondary goal, in a positive ethical sense, is to also maximize Other-Gain.  I used nine Basic Strategies, four of which anticipate the Other-Agent's choice, five of which ignore the Other-Agent's possible choices.
Best Cell: Choose the row containing the one cell with the greatest Self-Gain Payoff.
Best Row: Choose the row with the greatest average Self-Gain.
Minimize Loss: Determine the worst possible Self-Gain Payoff within each row.  Choose the row whose worst payoff is the least bad.
Least Surprise: Choose the row whose variation in Self-Gain Payoffs is minimum.
Most Surprise: Choose the row whose variation in Self-Gain Payoffs is maximum.
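The five strategies above ignore the Other-Agent and look only at the Self-Gain half of the matrix.  A Python sketch of each (my own formulation; self_payoffs is the rows-by-columns array of Self-Gain Payoffs, and each function returns a row index):

```python
import numpy as np

def best_cell(self_payoffs):
    # Row containing the single greatest Self-Gain Payoff.
    return int(np.unravel_index(np.argmax(self_payoffs), self_payoffs.shape)[0])

def best_row(self_payoffs):
    # Row with the greatest average Self-Gain.
    return int(np.argmax(self_payoffs.mean(axis=1)))

def minimize_loss(self_payoffs):
    # Row whose worst payoff is the least bad (a maximin rule).
    return int(np.argmax(self_payoffs.min(axis=1)))

def least_surprise(self_payoffs):
    # Row with the smallest variation (standard deviation) in payoffs.
    return int(np.argmin(self_payoffs.std(axis=1)))

def most_surprise(self_payoffs):
    # Row with the largest variation in payoffs.
    return int(np.argmax(self_payoffs.std(axis=1)))
```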
Assume Benevolence: Assume that Other-Agent will choose the column with the best average Self-Gain Payoffs across all cells within that column.  Choose the row containing the greatest Self-Gain cell Payoff within that column.
Assume Death-Wish: Assume that Other-Agent will choose the column with the worst average Other-Gain Payoffs across all cells within that column.  Choose the row containing the greatest Self-Gain cell Payoff within that column.
Assume Persecution: Assume that Other-Agent will choose the column with the worst average Self-Gain Payoffs across all cells within that column.  Choose the row containing the greatest Self-Gain cell Payoff within that column.
Assume Selfish: Assume that Other-Agent will choose the column with the best average Other-Gain Payoffs across all cells within that column.  Choose the row containing the greatest Self-Gain cell Payoff within that column.
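The four "Assume" strategies share one template: predict which column the Other-Agent will choose from the column averages, then take the row with the greatest Self-Gain within that predicted column.  A sketch of that template (assuming the matrix layout from the earlier sketches, with Self-Gain at index 0 and Other-Gain at index 1 in each cell):

```python
import numpy as np
from functools import partial

def anticipate(matrix, watch_other_gain, assume_best):
    """Pick a row by first guessing the Other-Agent's column.

    watch_other_gain -- predict from column averages of Other-Gain (True)
                        or of Self-Gain (False)
    assume_best      -- assume Other-Agent picks the best such column (True)
                        or the worst (False)
    """
    self_p, other_p = matrix[..., 0], matrix[..., 1]
    col_means = (other_p if watch_other_gain else self_p).mean(axis=0)
    col = int(np.argmax(col_means) if assume_best else np.argmin(col_means))
    # Within that predicted column, choose the row with the greatest Self-Gain.
    return int(np.argmax(self_p[:, col]))

assume_benevolence = partial(anticipate, watch_other_gain=False, assume_best=True)
assume_death_wish  = partial(anticipate, watch_other_gain=True,  assume_best=False)
assume_persecution = partial(anticipate, watch_other_gain=False, assume_best=False)
assume_selfish     = partial(anticipate, watch_other_gain=True,  assume_best=True)
```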
Altruism (vs. Antagonism): The amount (positive or negative) of Other-Gain the Self-Agent will consider in making a decision among rows.  Positive Altruism means that Self-Agent will seek to increase Other-Gain.  Negative Altruism (Antagonism) means that Self-Agent will seek to decrease Other-Gain.  Altruism is an adjustment made to the Basic Strategy.  Altruism may increase or decrease because of two factors (intrinsic or extrinsic).
Good (vs. Bad) Will: This is an unvarying amount of Altruism or Antagonism expressed by the Self-Agent.  It is inherent in Self-Agent, and does not change with encounter history.
Reciprocity: Altruism or Antagonism based on Other-Agent's previous choices.  Positive Reciprocity (Responsive) means that Self-Agent will respond to Other-Agent in the same manner (return Altruism for previous Altruism from Other-Agent, return Antagonism for previous Antagonism from Other-Agent).  This is analogous to the Prisoner's Dilemma Tit-for-Tat strategy.  Negative Reciprocity (Contrary) means that Self-Agent will respond to Other-Agent in the opposite manner (return Altruism for previous Antagonism from Other-Agent, return Antagonism for previous Altruism from Other-Agent).
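Read together, these three definitions suggest one simple adjustment (a sketch under my own assumptions about how Altruism enters the decision): the Self-Agent scores each cell as Self-Gain plus Altruism times Other-Gain, where Altruism is a fixed Good Will plus a Reciprocity term driven by the Other-Agent's previous behavior.

```python
def altruism(good_will, reciprocity, others_last_act):
    """Total Altruism (positive) or Antagonism (negative) toward Other-Agent.

    good_will       -- fixed, history-independent component (+ or -)
    reciprocity     -- +1 Responsive (tit-for-tat-like), -1 Contrary, 0 none
    others_last_act -- +1 if Other-Agent's previous choice was altruistic
                       toward us, -1 if antagonistic (an assumed encoding)
    """
    return good_will + reciprocity * others_last_act

def adjusted_payoff(self_gain, other_gain, altruism_level):
    # The quantity a Basic Strategy would maximize after the Altruism
    # adjustment: the Other-Agent's gain weighted by the Altruism level.
    return self_gain + altruism_level * other_gain
```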
Scenario: A long series of random Encounters among a population of Agents.  Each possible ethical strategy was represented by at least 3 Agents.  Each Agent encountered every other Agent at least 20 times within a Scenario.  The average quality (concordant, neutral, or discordant) of each Encounter was determined by the Payoff Correlation.
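A skeleton of such a Scenario run might look like this (structure only; the choose() interface, the pair bookkeeping, and the reuse of the random_payoffs sketch above are my assumptions, not the actual code):

```python
import itertools
import random

def run_scenario(agents, payoff_correlation, meetings_per_pair=20):
    """Every pair of Agents meets `meetings_per_pair` times; returns total gains."""
    totals = {agent: 0.0 for agent in agents}
    for a, b in itertools.combinations(agents, 2):
        for _ in range(meetings_per_pair):
            rows, cols = random.randint(2, 4), random.randint(2, 4)
            matrix = random_payoffs(rows, cols, payoff_correlation)
            i = a.choose(matrix)                 # a picks a row
            # b sees the same Encounter from its own side: its choices become
            # rows and its own gain sits first in each cell.
            j = b.choose(matrix[..., ::-1].transpose(1, 0, 2))
            totals[a] += matrix[i, j, 0]         # Self-Gain for a
            totals[b] += matrix[i, j, 1]         # Self-Gain for b
    return totals
```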
Correlation Coefficient: A measure of linear relationship between two variables, ranging between -1.00 and 1.00.  A value of 0.00 indicates no linear relation between the two variables.  A value of 1.00 indicates a perfect positive relationship, being able to perfectly predict one variable from the other.  A value of -1.00 indicates a perfect negative relationship, again with absolute predictability, but by a reversed, or inverted, relationship.
Payoff Correlation: A correlation coefficient representing how Encounter cell payoffs are (linearly) related, ranging from +1.00 (perfect positive) to -1.00 (perfect negative).  A positive Payoff Correlation means that Self-Gain and Other-Gain payoffs within each cell tend to coincide (a gain for Self-Agent usually means a gain for Other-Agent; a loss for Self-Agent usually means a loss for Other-Agent).  This would be a Concordant scenario.  A negative Payoff Correlation means that Self-Gain and Other-Gain payoffs within each cell tend to oppose each other (a gain for Self-Agent usually means a loss for Other-Agent; a loss for Self-Agent usually means a gain for Other-Agent).  This would be a Discordant scenario.  Five Payoff Correlations were used: +1.00, +0.50, 0.00, -0.50, & -1.00.
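To check that generated Encounter Matrices really are Concordant, neutral, or Discordant, one can measure the correlation of the cell payoffs directly (a sketch, reusing the random_payoffs helper sketched earlier):

```python
import numpy as np

matrix = random_payoffs(4, 4, payoff_correlation=0.5,
                        rng=np.random.default_rng(1))
self_gains  = matrix[..., 0].ravel()
other_gains = matrix[..., 1].ravel()
# Pearson correlation between the paired Self-Gain / Other-Gain values;
# with only 16 cells it will only roughly approach the intended +0.50.
print(np.corrcoef(self_gains, other_gains)[0, 1])
```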
Groups
I use the terminology above to describe ethical strategies and encounters for individual Agents.  Some of my research examined the influence of group membership on ethical strategy and outcome.  I present a few more terms appropriate for discussing these group dynamics:
Individual Self-Gain: This is the gain (or loss) sustained by an individual Agent after being taxed for the Group Treasury.  This gain/loss belongs to the individual, not to the group.  This is an attribute of the individual Agent.
Tax Rate: The proportion of an Agent’s gain (or loss) that is transferred from the Agent to the Group Treasury.  This is an attribute of the individual Agent, but may also be regarded as a group attribute when all Agents have the same Tax Rate.
Group Treasury: This consists of the cumulative contributions made to the group by its members.  At the end of an analysis, the Group Treasury, be it positive or negative, is returned to all group members in equal proportions.  This is an attribute of the group, not of an individual Agent.
Adjusted Self-Gain: This is the sum of Individual Self-Gain plus the Agent’s share of the Group Treasury.  If there are 10 Agents in the group, then an Agent’s Adjusted Self-Gain is the Individual Self-Gain plus 10% of the Group Treasury.  This is an attribute of an Individual Agent.
Total Adjusted Self-Gain: This is the sum of the Adjusted Self-Gains for all group members.  This is an attribute of the group, not of an individual Agent.
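Put together, the group bookkeeping is straightforward arithmetic (a sketch; the function and variable names are mine):

```python
def adjusted_self_gains(raw_gains, tax_rate):
    """Tax each member's gain, pool the tax in the Group Treasury, and
    redistribute the Treasury equally.

    raw_gains -- each Agent's untaxed gain (or loss) from its Encounters
    tax_rate  -- proportion of each gain transferred to the Treasury
    """
    individual = [g * (1 - tax_rate) for g in raw_gains]   # Individual Self-Gain
    treasury = sum(g * tax_rate for g in raw_gains)        # Group Treasury
    share = treasury / len(raw_gains)                      # equal redistribution
    adjusted = [ind + share for ind in individual]         # Adjusted Self-Gain
    total = sum(adjusted)                                  # Total Adjusted Self-Gain
    return adjusted, total

adjusted, total = adjusted_self_gains([5.0, -2.0, 3.0], tax_rate=0.2)
```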