Maximum-Weight Closure of a Graph (POJ 2987 Firing)
Published: 2019-04-30


Time Limit: 5000MS   Memory Limit: 131072K
Total Submissions: 10884   Accepted: 3286

Description

You’ve finally got mad at “the world’s most stupid” employees of yours and decided to do some firings. You’re now simply too mad to give response to questions like “Don’t you think it is an even more stupid decision to have signed them?”, yet calm enough to consider the potential profit and loss from firing a good portion of them. While getting rid of an employee will save your wage and bonus expenditure on him, termination of a contract before expiration costs you funds for compensation. If you fire an employee, you also fire all his underlings and the underlings of his underlings and those underlings’ underlings’ underlings… An employee may serve in several departments and his (direct or indirect) underlings in one department may be his boss in another department. Is your firing plan ready now?

Input

The input starts with two integers n (0 < n ≤ 5000) and m (0 ≤ m ≤ 60000) on the same line. Next follow n + m lines. The first n of these lines give the net profit/loss from firing the i-th employee individually, b_i (|b_i| ≤ 10^7, 1 ≤ i ≤ n). The remaining m lines each contain two integers i and j (1 ≤ i, j ≤ n), meaning the i-th employee has the j-th employee as his direct underling.

Output

Output two integers separated by a single space: the minimum number of employees to fire to achieve the maximum profit, and the maximum profit.

Sample Input

5 5
8
-9
-20
12
-10
1 2
2 5
1 4
3 4
4 5

Sample Output

2 2

Problem restatement:

Given a directed graph with n vertices and m edges, where every vertex carries a weight, find a closed subgraph — one where, whenever a vertex is chosen, every vertex it points to must be chosen as well — whose total vertex weight is maximum. Output the minimum number of chosen vertices that achieves this maximum, together with the maximum total weight.
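
To make "closed" concrete, here is a reading of the sample above (my own illustration, taken directly from the sample's edge list): the edges are 1→2, 2→5, 1→4, 3→4, 4→5, so {4, 5} is a closed set (4's only successor is 5, and 5 has none), while {1, 2, 5} is not closed, because 1 also points to 4. In the story of the problem: firing employee 1 forces you to also fire employees 2, 4 and 5, for a net profit of 8 − 9 + 12 − 10 = 1.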

Construction (reduce the problem to max flow / min cut):

① If the graph has an edge a→b, connect a to b with an edge of infinite capacity.

② If vertex a has positive weight, connect the source S to a with an edge of capacity val[a].

③ If vertex a has negative weight, connect a to the sink T with an edge of capacity -val[a].

The maximum closure weight is then the sum of the positive vertex weights minus the max flow.

The maximum-weight closed subgraph itself is the part of the residual network still reachable from the source.
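
As a sanity check, here is the construction worked through on the sample above (the flow decomposition below is one possible max flow; the matching cut certifies its value). The weights are 8, −9, −20, 12, −10, so we add S→1 with capacity 8, S→4 with capacity 12, 2→T with capacity 9, 3→T with capacity 20, 5→T with capacity 10, plus infinite-capacity edges 1→2, 2→5, 1→4, 3→4, 4→5. The positive weights sum to 8 + 12 = 20. Pushing 8 along S→1→2→T and 10 along S→4→5→T gives a flow of 18, and the cut {S→1, 5→T} also has capacity 8 + 10 = 18, so 18 is the max flow. Hence the maximum profit is 20 − 18 = 2, and in the residual network S still reaches exactly {4, 5} (S→4 has 2 units of residual capacity and 4→5 is infinite), so the answer is the closure {4, 5}: fire 2 employees for a profit of 12 − 10 = 2, matching the sample output "2 2".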

This construction can never produce an illegal (non-closed) selection. Suppose a→b is an edge, val[a] > 0 and val[b] < 0, and you selected a but not b: that amounts to cutting neither S→a nor b→T, and since the infinite edge a→b can never be cut, S and T would still be connected — so the current flow is certainly not a max flow.

Here is a short proof of the general case:
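
(This is my reconstruction of the standard argument; the notation follows the construction above.) Take any closed set C and consider the S-T cut whose source side is {S} ∪ C. No infinite edge a→b crosses it, because a ∈ C forces b ∈ C. The edges that do cross it are exactly S→a for every positive vertex a not in C, and b→T for every negative vertex b in C. Therefore

cut(C) = Σ { val[a] : val[a] > 0, a ∉ C } + Σ { -val[b] : val[b] < 0, b ∈ C }

and the weight of C is

weight(C) = Σ { val[v] : v ∈ C } = Σ { val[a] : val[a] > 0 } - cut(C).

The first term is a constant, so maximizing weight(C) is the same as minimizing cut(C), and conversely every finite cut arises from a closed set in this way. By max-flow = min-cut, the maximum closure weight is the sum of the positive weights minus the max flow, and the source side of a min cut — the vertices reachable from S in the residual network — is a maximum-weight closure.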

#include <cstdio>
#include <cstring>
#include <queue>
#include <algorithm>
#define LL long long
using namespace std;

LL n, m, sum, cnt, S, T, val[5005], vis[5005], head[5005], h[5005], cur[5005];

typedef struct
{
    LL to, next;
    LL flow;
} Road;
Road G[160005];

/* Add an edge u->v with the given capacity. cnt starts at 1, so a
   forward edge i and its reverse edge are always the pair (i, i^1). */
void Add(LL u, LL v, LL flow)
{
    cnt++;
    G[cnt].next = head[u];
    head[u] = cnt;
    G[cnt].to = v;
    G[cnt].flow = flow;
}

/* BFS phase of Dinic: build the level graph; returns 1 if T is reachable. */
int Jud()
{
    LL now, i;
    queue<LL> q;
    memset(h, -1, sizeof(h));
    q.push(S);
    h[S] = 0;
    while (q.empty() == 0)
    {
        now = q.front();
        q.pop();
        for (i = head[now]; i != 0; i = G[i].next)
        {
            if (G[i].flow && h[G[i].to] == -1)
            {
                h[G[i].to] = h[now] + 1;
                q.push(G[i].to);
            }
        }
    }
    if (h[T] != -1)
        return 1;
    return 0;
}

/* DFS phase of Dinic: push flow along the level graph (current-arc optimization). */
LL Sech(LL x, LL flow)
{
    LL w, used, i;
    if (x == T)
        return flow;
    used = 0;
    for (i = cur[x]; i != 0; i = G[i].next)
    {
        if (h[G[i].to] == h[x] + 1)
        {
            w = Sech(G[i].to, min(flow - used, G[i].flow));
            G[i].flow -= w;
            G[i ^ 1].flow += w;
            if (G[i].flow)
                cur[x] = i;
            used += w;
            if (used == flow)
                return flow;
        }
    }
    if (used == 0)
        h[x] = -1;
    return used;
}

LL Dinic()
{
    LL i, flow = 0;
    while (Jud())
    {
        for (i = S; i <= T; i++)
            cur[i] = head[i];
        flow += Sech(S, (LL)1 << 50);
    }
    return flow;
}

/* DFS on the residual network: count the vertices still reachable from S,
   i.e. the source side of the min cut (the chosen closure plus S itself). */
void Sech(LL x)
{
    int i, v;
    sum++;
    vis[x] = 1;
    for (i = head[x]; i != 0; i = G[i].next)
    {
        v = G[i].to;
        if (G[i].flow > 0 && vis[v] == 0)
            Sech(v);
    }
}

int main(void)
{
    LL n, m, i, x, y, ans;
    while (scanf("%lld%lld", &n, &m) != EOF)
    {
        cnt = 1, ans = 0;
        S = 0, T = n + 1;
        memset(head, 0, sizeof(head));
        for (i = 1; i <= n; i++)
        {
            scanf("%lld", &val[i]);
            if (val[i] > 0)
            {
                ans += val[i];           /* accumulate the sum of positive weights */
                Add(S, i, val[i]);       /* source -> positive vertex */
                Add(i, S, 0);
            }
            else
            {
                Add(i, T, -val[i]);      /* negative vertex -> sink */
                Add(T, i, 0);
            }
        }
        for (i = 1; i <= m; i++)
        {
            scanf("%lld%lld", &x, &y);
            Add(x, y, (LL)1 << 50);      /* "infinite" capacity for edge x -> y */
            Add(y, x, 0);
        }
        memset(vis, 0, sizeof(vis));
        ans -= Dinic();                  /* max profit = positive sum - max flow */
        sum = 0;
        Sech(S);                         /* count S plus the chosen vertices */
        printf("%lld %lld\n", sum - 1, ans);  /* subtract 1 to exclude S */
    }
    return 0;
}

Reposted from: http://bimgf.baihongyu.com/
